# Eigenvalues of the product of two matrices

by Hassan2
Tags: eigenvalues matrices
P: 34
 Quote by ajkoer Let A = P'EP and B = Q'DQ, where P'P = Q'Q = I. Now AB = P'EPQ'DQ. What we would need for the eigenvalues to be a direct product is PQ' = I and P'Q = I. But (PQ')' = QP' = I = P'Q, so Q(P'P) = P'QP = Q, i.e. P = I, which is a different finding than previously indicated.
CORRECTION:
It is true that for each corresponding eigenvalue to be a direct product, one must have PQ' = I, which implies PQ'Q = Q, i.e. P = Q (since Q'Q = I). This is the prior statement (namely, that the eigenvectors must be the same) for each eigenvalue k. However, it is clear that:

Det(AB) = Det(A)*Det(B) = Det(P')*Det(E)*Det(P)*Det(Q')*Det(D)*Det(Q) = Det(E)*Det(D),

since the determinant of a product is the product of the determinants, and Det(P')*Det(P) = Det(Q')*Det(Q) = 1.

So, in total, the product of all the eigenvalues of AB equals the product of all the eigenvalues of A and B together, from which one may be able to establish search bounds for individual eigenvalues.
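A quick numerical check of this determinant identity, as a NumPy sketch (the matrices are arbitrary random examples, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# product of all eigenvalues of AB ...
lhs = np.prod(np.linalg.eigvals(A @ B))
# ... equals det(A) * det(B), the product of the eigenvalues of A and of B
rhs = np.linalg.det(A) * np.linalg.det(B)
```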

Note, if the Kth eigenvalue is negative, I claim that multiplying all entries in the Kth (or possibly some other) row by minus one flips the sign of the negative eigenvalue to positive (or leaves it negative and makes another eigenvalue negative), since this row operation is known to change only the sign of the determinant. (The absolute values of the eigenvalues can, in general, change under this row negation, but the magnitude of their product remains constant.) If the matrix can be diagonalized, this sign change can occur only through a change of sign in one (or an odd number) of the eigenvalues.

Interestingly, even without any sign-change operations, there are instances where both A and B have positive eigenvalues yet the product AB has an even number of negative eigenvalues! This can be seen without direct knowledge of the eigenvalues, from the characteristic polynomial: the coefficient equal to the negative of the trace (the sum of the eigenvalues) noticeably declines in absolute value.
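The claimed occurrence can be shown with a small concrete example (matrices constructed here for illustration, not taken from the thread): each factor below has both eigenvalues equal to 1, yet the product has two negative eigenvalues, so det(AB) = 1 stays positive.

```python
import numpy as np

# A and B each have eigenvalues {1, 1} (triangular, so read off the diagonal),
# yet AB has two negative real eigenvalues, (-3 +/- sqrt(5)) / 2.
A = np.array([[1.0, 5.0],
              [0.0, 1.0]])
B = np.array([[ 1.0, 0.0],
              [-1.0, 1.0]])

eigs_AB = np.linalg.eigvals(A @ B)   # an even number (two) of negative eigenvalues
det_AB = np.linalg.det(A @ B)        # still +1 = det(A) * det(B)
```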

Note, if all eigenvalues are at least 1, the log of the determinant serves as an upper bound on the log of the largest eigenvalue. If we know the largest eigenvalue (via the power method), then (log Det - log Max Ei) is an upper bound on the log of the second largest eigenvalue.
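A sketch of this bound in NumPy, assuming all eigenvalues are at least 1 (the test matrix and its spectrum {9, 4, 2} are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# random symmetric matrix with known eigenvalues 9, 4, 2 (all >= 1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([9.0, 4.0, 2.0]) @ Q.T

# power method for the largest eigenvalue
x = rng.standard_normal(3)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)
lam_max = x @ A @ x                    # Rayleigh quotient, converges to 9

log_det = np.linalg.slogdet(A)[1]      # log(9 * 4 * 2) = log 72
bound_2nd = log_det - np.log(lam_max)  # log 8, an upper bound on log(4)
```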
P: 403
 Quote by AlephZero If one of ##A## or ##B## is positive definite, a slightly better idea for small matrices is to do a Cholesky factorization, say ## A = LL^T##, find the eigenpairs of ##L^{-1}BL^{-T}## and then transform the results back to the original problem.
Is there a relation between the eigenvalues/eigenvectors of $L^{-1}AL^{-T}$ and those of $A$?
P: 403
 Quote by ajkoer Note, if all eigenvalues are at least 1, the log of the determinant serves as an upper bound on the log of the largest eigenvalue. If we know the largest eigenvalue (via the power method), then (log Det - log Max Ei) is an upper bound on the log of the second largest eigenvalue.
If the eigenvalues of A and B are all larger than 1, then det(AB) > det(A); that is, the largest eigenvalue of AB has a larger upper bound than that of A. In this case, could the largest eigenvalue of AB be smaller than that of A? (I think it is possible.)
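A hypothetical example suggests the answer is yes, at least for non-symmetric factors (the matrices below are constructed for illustration, not taken from the thread):

```python
import numpy as np

# A has eigenvalues {10, 1.01}; B has eigenvalues 1.01 +/- i*sqrt(3),
# all of modulus > 1.  Yet the eigenvalues of AB are a complex pair of
# modulus sqrt(det(AB)) ~ 6.37, smaller than A's largest eigenvalue 10.
A = np.diag([10.0, 1.01])
B = np.array([[1.01, 3.0],
              [-1.0, 1.01]])

max_A  = np.abs(np.linalg.eigvals(A)).max()      # 10
max_AB = np.abs(np.linalg.eigvals(A @ B)).max()  # ~6.37 < 10
```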
 P: 34 Here is a relationship between A = LL' and the diagonal matrix D of eigenvalues of A. Assume there exists a matrix P such that P'P = I and PAP' = D. Then

P(A^2)P' = D^2, i.e. P(LL'LL')P' = D^2, i.e. (PL)(L'L)(PL)' = D^2.

Note that (D^-1/2)(PL)(L'P')(D^-1/2) = (D^-1/2)(PAP')(D^-1/2) = (D^-1/2)(D)(D^-1/2) = I.

So Q = (D^-1/2)(PL) satisfies QQ' = I and Q(L'L)Q' = (D^-1/2)(D^2)(D^-1/2) = D: the rows of Q are the (transposed) eigenvectors of the matrix L'L, and the eigenvalues of L'L equal D, the same as those of A = LL'.
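This is consistent with the standard fact that LL' and L'L share the same eigenvalues, which is easy to verify numerically (a sketch with a random symmetric positive-definite matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)     # symmetric positive definite
L = np.linalg.cholesky(A)         # A = L L'

# LL' (= A) and L'L have the same eigenvalues
eigs_A   = np.sort(np.linalg.eigvalsh(A))
eigs_LtL = np.sort(np.linalg.eigvalsh(L.T @ L))
```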
Engineering
Thanks
P: 6,041
 Quote by Hassan2 Is there a relation between the eigenvalues/eigenvectors of $L^{-1}AL^{-T}$ and those of $A$?
Er, $L^{-1}AL^{-T}$ is the unit matrix!

If you meant
 Is there a relation between the eigenvalues/eigenvectors of $L^{-1}BL^{-T}$ and those of $A$?
The eigenvalues are the same and the eigenvectors are related.
If ## L^{-1} B L^{-T} x = \lambda x ##, let ## y = L^{-T} x##.
## L^{-1} B y = \lambda L^T y ##
## L L^{-1} B y = \lambda L L^T y##
## B y = \lambda A y##.
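The derivation above can be checked numerically (a NumPy sketch with random matrices: A symmetric positive definite, B symmetric):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)     # SPD, so the Cholesky factor exists
B0 = rng.standard_normal((4, 4))
B = B0 + B0.T                     # symmetric

L = np.linalg.cholesky(A)         # A = L L^T
Linv = np.linalg.inv(L)
C = Linv @ B @ Linv.T             # L^{-1} B L^{-T}, still symmetric

lam, X = np.linalg.eigh(C)        # C x = lambda x
Y = np.linalg.solve(L.T, X)       # transform back: y = L^{-T} x

# each column y of Y satisfies the generalized problem B y = lambda A y
resid = B @ Y - (A @ Y) * lam
```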
P: 403
 Quote by AlephZero Er, $L^{-1}AL^{-T}$ is the unit matrix! If you meant The eigenvalues are the same and the eigenvectors are related. If ## L^{-1} B L^{-T} x = \lambda x ##, let ## y = L^{-T} x##. ## L^{-1} B y = \lambda L^T y ## ## L L^{-1} B y = \lambda L L^T y## ## B y = \lambda A y##.
Sorry, I knew that. When I asked the question I had another problem in mind, which I explain below:

I have a symmetric positive-definite matrix A whose entries are very large, with the diagonal elements the largest. If I want to solve the system of equations $Ax_{k+1}=x_{k}$ (as required in most eigenvalue problems), I need to scale my matrix first, otherwise the round-off error grows large. A nice idea is to scale it with the inverse of the diagonal elements, but in order to preserve symmetry, the new equation takes the following form:

$\hat{A}y_{k+1}=y_{k}$

with $\hat{A}=D^{-1}AD^{-1}$ and $y_{k}=D^{-1}x_{k}$

where D is a diagonal matrix whose entries are the square roots of the diagonal entries of A. Now all the diagonal entries of $\hat{A}$ are 1.0 and the off-diagonal ones have magnitudes less than 1.0.

If there were a relation between the eigenvalues of $A$ and $\hat{A}$, it would make the job easier. But without such a relation, in each iteration I have to convert between x and y to find the eigenpairs of A directly.
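The scaling described above can be sketched as follows (assuming D holds the square roots of the diagonal of A, so that $\hat{A}$ gets a unit diagonal; the SPD matrix is a random stand-in):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4)) * 1e6
A = M @ M.T + 1e13 * np.eye(4)    # SPD with very large, dominant diagonal

d = np.sqrt(np.diag(A))           # D = diag(sqrt(A_ii))
Ahat = A / np.outer(d, d)         # D^{-1} A D^{-1}, still symmetric

# Ahat has a unit diagonal; off-diagonals have magnitude < 1
# (|A_ij| < sqrt(A_ii A_jj) for a positive-definite A)
```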

Another question is about the size of the matrices you deal with. You talk about decomposing a matrix and inverting the triangular factor as if this were routine. But in finite element problems the matrices are large and sparse; matrices of order 100,000 are very common. I store only the nonzero entries of my matrix, and of course I can't perform a complete $LL^{T}$ decomposition because the L matrix is not necessarily sparse. Am I missing something here?
 Engineering Sci Advisor Thanks P: 6,041 "Matrix balancing" is a well-known technique for improving the behaviour of some non-symmetric eigenproblems, but the usual method is ##DAD^{-1}##, not ##D^{-1}AD^{-1}##:
http://web.eecs.utk.edu/~dongarra/et...s/node205.html
http://www.mathworks.co.uk/help/tech...f/balance.html
You shouldn't need to do this with symmetric positive definite matrices though, especially if the matrix is diagonally dominant.

If A is a sparse matrix, the L matrix is also sparse, provided the rows and columns of L are ordered in a sensible way. For example, if A is a banded matrix, L has the same bandwidth as A. Look up "skyline storage" for a fairly simple sparse storage method that is more general than banded matrices.

There are many algorithms to reorder the rows and columns of a matrix to reduce the size of L. Modern algorithms treat this as a problem in graph theory and may do a symbolic (non-numerical) decomposition of the matrix to attempt to minimize the amount of fill-in in the L matrix. Two simple, fast, old methods which work pretty well for finite element problems are Grooms and Cuthill-McKee. You can often get a pretty good ordering for a FE model simply by assembling the elements in a "logical" order, working through the structure systematically, rather than using the "random" node numbers produced by a mesh generator. Many FE preprocessors include a reordering algorithm.
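The point about banded matrices can be seen directly: the Cholesky factor of a banded SPD matrix incurs no fill-in outside the band (a minimal NumPy sketch with a made-up diagonally dominant band matrix):

```python
import numpy as np

n, bw = 8, 2                       # order 8, lower/upper bandwidth 2
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - bw), min(n, i + bw + 1)):
        A[i, j] = 10.0 if i == j else 1.0   # diagonally dominant => SPD

L = np.linalg.cholesky(A)
# L keeps the same lower bandwidth as A: entries with i - j > bw stay zero
```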
