
#19
Apr16-12, 01:14 PM

P: 34

It is true that, for each corresponding eigenvalue to be a direct product, one must have PQ' = I, which implies PQ'Q = Q, i.e. P = Q (since Q'Q = I). That is the prior statement (namely, the eigenvectors must be the same) for each eigenvalue k. However, it is clear that: Det(AB) = Det(A)*Det(B) = Det(P'EP)*Det(Q'DQ) = Det(P')Det(E)Det(P)Det(Q')Det(D)Det(Q) = Det(E)*Det(D), since Det(P')Det(P) = Det(P'P) = Det(I) = 1, and likewise for Q. So, in total, the product of all the eigenvalues of A*B equals the product of all the eigenvalues of A and B together, from which one may be able to establish search bounds for individual eigenvalues.

Note: if the Kth eigenvalue is negative, I claim that by multiplying all entries in the Kth (or possibly some other) row by minus one, the sign of the negative eigenvalue becomes positive (or it remains negative and another eigenvalue becomes negative), since this matrix modification is known to change only the sign of the determinant. (The absolute values of all the eigenvalues can, in general, change under this row negation, but their product remains constant.) If the matrix can be diagonalized, this sign change can occur only through a sign change in one (or an odd number) of the eigenvalues.

Interestingly, in one matrix-product instance, even without any sign-change operations and with both matrix A and matrix B having positive eigenvalues, the product matrix AB had an even number of negative eigenvalues! This occurrence is apparent without direct knowledge of the eigenvalues through the characteristic polynomial, where one coefficient gives the negative of the trace (the sum of the eigenvalues), which noticeably declined in absolute value.

Note: if all eigenvalues are positive, the log of the determinant may serve as a bound on the log of the largest possible eigenvalue. If we know the largest eigenvalue (via the power method), then (log Det - log MaxEig) is an upper bound on the log of the second-largest eigenvalue.
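A quick NumPy sketch of the determinant identity and the row-negation claim above (the matrices here are made-up random examples, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random symmetric matrices A and B (hypothetical example data).
A = rng.standard_normal((4, 4)); A = A + A.T
B = rng.standard_normal((4, 4)); B = B + B.T

# det(AB) = det(A)*det(B): the product of the eigenvalues of AB
# equals the product of all the eigenvalues of A and of B together.
prod_eigs_AB = np.prod(np.linalg.eigvals(A @ B))
det_A_det_B = np.linalg.det(A) * np.linalg.det(B)
assert np.isclose(prod_eigs_AB, det_A_det_B)

# Negating one row of A flips the sign of det(A) (and hence of the
# product of its eigenvalues), while the individual eigenvalues
# may all change in magnitude.
A_neg = A.copy()
A_neg[2, :] *= -1
assert np.isclose(np.linalg.det(A_neg), -np.linalg.det(A))
```

Note that the eigenvalues of AB can be complex even when A and B are symmetric, which is why `eigvals` rather than `eigvalsh` is used for the product.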



#20
Apr17-12, 06:57 AM

P: 404





#21
Apr17-12, 07:25 AM

P: 404





#22
Apr17-12, 10:51 AM

P: 34

Here is a relationship between A = LL' and the diagonal matrix D of eigenvalues of A.
Assume there exists a matrix P such that P'P = I and PAP' = D. Then P(A^2)P' = D^2, i.e. P(LL'LL')P' = D^2, i.e. (PL)(L'L)(PL)' = D^2.

Note, (D^{-1/2})(PL)(L'P')(D^{-1/2}) = (D^{-1/2})(PAP')(D^{-1/2}) = (D^{-1/2})(D)(D^{-1/2}) = I, and likewise (D^{-1/2})(PL)(L'L)(PL)'(D^{-1/2}) = (D^{-1/2})(D^2)(D^{-1/2}) = D.

So the rows of (D^{-1/2})(PL) are the (orthonormal) eigenvectors of the matrix L'L, and its eigenvalues equal D.
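A small NumPy check of the relations above on a made-up SPD matrix (A = LL' via Cholesky, P the orthogonal eigenvector matrix of A):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SPD matrix A with Cholesky factorization A = L L'.
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # shift to make it safely positive definite
L = np.linalg.cholesky(A)            # A = L @ L.T

# Diagonalize A: P A P' = D with P'P = I (rows of P are eigenvectors of A).
eigvals, vecs = np.linalg.eigh(A)
P = vecs.T
D = np.diag(eigvals)
assert np.allclose(P @ A @ P.T, D)

# U = D^{-1/2} P L has orthonormal rows and diagonalizes L'L:
# U (L'L) U' = D, so L'L has the same eigenvalues D as A.
U = np.diag(eigvals ** -0.5) @ P @ L
assert np.allclose(U @ U.T, np.eye(4))
assert np.allclose(U @ (L.T @ L) @ U.T, D)
```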



#23
Apr17-12, 12:37 PM

Engineering
Sci Advisor
HW Helper
Thanks
P: 6,339

If you meant ## L^{-1} B L^{-T} x = \lambda x ##, let ## y = L^{-T} x##. Then
## L^{-1} B y = \lambda L^T y ##
## L L^{-1} B y = \lambda L L^T y##
## B y = \lambda A y##.
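A sketch of this Cholesky reduction in SciPy, on hypothetical A (SPD) and B (symmetric); triangular solves are used instead of forming ## L^{-1} ## explicitly:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(2)

# Hypothetical symmetric B and SPD A for the generalized problem B y = lam A y.
B = rng.standard_normal((5, 5)); B = B + B.T
M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)

L = cholesky(A, lower=True)                     # A = L L^T

# Form C = L^{-1} B L^{-T} via two triangular solves.
C = solve_triangular(L, B, lower=True)          # L^{-1} B
C = solve_triangular(L, C.T, lower=True).T      # transpose trick gives L^{-1} B L^{-T}

lam, X = np.linalg.eigh(C)                      # standard symmetric problem C x = lam x

# Recover y = L^{-T} x and check B y = lam A y for every eigenpair.
Y = solve_triangular(L, X, lower=True, trans='T')
assert np.allclose(B @ Y, A @ Y @ np.diag(lam))
```

(SciPy can also solve the generalized problem directly with `scipy.linalg.eigh(B, A)`; the above just makes the reduction in the post explicit.)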



#24
Apr17-12, 09:04 PM

P: 404

I have a symmetric positive-definite matrix A whose entries are very large, and the diagonal elements are the largest. If I want to solve the system of equations [itex]Ax_{k+1}=x_{k}[/itex] (as is required in most eigenvalue problems), I need to scale my matrix first, otherwise the round-off error would grow large. A nice idea is to scale it with the inverse of the diagonal elements, but in order to preserve the symmetry, the new equation takes the following form: [itex]\hat{A}y_{k+1}=y_{k}[/itex] with [itex]\hat{A}=D^{-1}AD^{-1}[/itex] and [itex]y_{k}=D^{-1}x_{k}[/itex], where D is a diagonal matrix with entries equal to the square roots of the diagonal entries of A. Now all the diagonal entries of [itex]\hat{A}[/itex] are 1.0 and the off-diagonal ones have magnitudes less than 1.0. If there were a relation between the eigenvalues of [itex]A[/itex] and [itex]\hat{A}[/itex], it would make the job easier. But without such a relation, in each iteration I have to do a conversion between x and y to find the eigenpairs of A directly.

Another question is about the size of the matrices you deal with. You're talking about decomposing a matrix and inverting the triangular matrix as if they are always possible. But in finite element problems, the matrices are large and sparse; it's very usual to have matrices of order 100,000. I save only the nonzero entries of my matrix, and of course I can't perform a complete [itex]LL^{T}[/itex] decomposition because the L matrix is not necessarily sparse. Am I missing something here?
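A minimal NumPy sketch of this symmetric diagonal scaling on a made-up SPD matrix; the unit diagonal, the bound on the off-diagonals, and the fact that the eigenvalues change (only their signs are preserved, by Sylvester's law of inertia, since this is a congruence rather than a similarity) can all be checked directly:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical SPD matrix with large, dominant diagonal entries.
M = rng.standard_normal((4, 4))
A = M @ M.T + 100 * np.diag([1.0, 5.0, 20.0, 50.0])

# Symmetric diagonal scaling: D_ii = sqrt(A_ii), Ahat = D^{-1} A D^{-1}.
d = np.sqrt(np.diag(A))
Ahat = A / np.outer(d, d)
assert np.allclose(np.diag(Ahat), 1.0)          # unit diagonal
assert np.all(np.abs(Ahat) <= 1.0 + 1e-12)      # |Ahat_ij| <= 1 by Cauchy-Schwarz

# Congruence, not similarity: the eigenvalues themselves change,
# but they all stay positive (the inertia is preserved).
assert not np.allclose(np.sort(np.linalg.eigvalsh(A)),
                       np.sort(np.linalg.eigvalsh(Ahat)))
assert np.all(np.linalg.eigvalsh(Ahat) > 0)
```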



#25
Apr18-12, 09:22 AM

Engineering
Sci Advisor
HW Helper
Thanks
P: 6,339

"Matrix balancing" is a well-known technique for improving the behaviour of some nonsymmetric eigenproblems, but the usual method is ##DAD^{-1}##, not ##D^{-1}AD^{-1}##.
http://web.eecs.utk.edu/~dongarra/et...s/node205.html http://www.mathworks.co.uk/help/tech...f/balance.html

You shouldn't need to do this with symmetric positive definite matrices, though, especially if the matrix is diagonally dominant.

If A is a sparse matrix, the L matrix is also sparse, provided the rows and columns of L are ordered in a sensible way. For example, if A is a banded matrix, L has the same bandwidth as A. Look up "skyline storage" for a fairly simple sparse storage method that is more general than banded matrices.

There are many algorithms to reorder the rows and columns of a matrix to reduce the size of L. Modern algorithms treat this as a problem in graph theory and may do a symbolic (non-numerical) decomposition of the matrix to attempt to minimize the amount of fill-in in the L matrix. Two simple, fast, old methods which work pretty well for finite element problems are Grooms and Cuthill-McKee. You can often get a pretty good ordering for an FE model simply by assembling the elements in a "logical" order, working through the structure systematically, rather than using the "random" node numbers produced by a mesh generator. Many FE preprocessors include a reordering algorithm.
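For what it's worth, SciPy ships a reverse Cuthill-McKee implementation; here is a small sketch on a made-up sparse symmetric pattern (the `bandwidth` helper and the example matrix are hypothetical, not from the post):

```python
import numpy as np
from scipy.sparse import csr_matrix, eye
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Hypothetical sparse symmetric pattern with a poor "random" node ordering.
n = 8
rows = [0, 0, 1, 2, 3, 4, 5, 6]
cols = [7, 3, 6, 5, 4, 7, 6, 7]
data = np.ones(len(rows))
A = csr_matrix((data, (rows, cols)), shape=(n, n))
A = (A + A.T + 2 * eye(n)).tocsr()              # symmetrize, nonzero diagonal

def bandwidth(M):
    """Largest distance of any nonzero from the diagonal."""
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

# Reverse Cuthill-McKee permutation, applied symmetrically: P A P^T.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm][:, perm]

print(bandwidth(A), "->", bandwidth(A_perm))    # bandwidth typically shrinks
```

With the bandwidth reduced, a banded or skyline Cholesky factor stays correspondingly narrow, which is the point of the reordering.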

