Can Eigenvalues of Matrix Addition Be Simplified?

Eigenvalues of matrix sums cannot be simplified in general; they equal the sums of the individual eigenvalues only under special conditions, such as when both matrices are diagonal. The discussion concerns efficiently computing eigenvalues and eigenvectors of many large matrices H_final, each the sum of a fixed real symmetric matrix and a varying diagonal matrix. The proposed optimization of combining the eigendecomposition of H_constant with the eigenvalues of H_location is invalid in general, because eigenvalues do not behave additively under matrix addition. A closed-form solution or a useful approximation is raised as a possibility but left open.
vkillion
Hello,

I have a linear algebra problem that I need help with.

Basically, I need the eigenvalues and eigenvectors of several (sometimes tens of thousands of) very large matrices (6^n x 6^n, where n >= 3, to be specific). Currently we just use MATLAB's eig() function to get them. I am trying to find optimizations for the simulations to cut down on computing time. There are three matrices that we use.

H_constant - generated once before the loop. Real and symmetric. Does not change after the initial calculation.

H_location - generated during each iteration. Diagonal.

H_final - H_constant + H_location. Therefore it is also real and symmetric.

It is H_final that we need the eigenvalues and eigenvectors of. My theory is that we calculate the eigenvalues and eigenvectors of H_constant (which won't change after the initial calculation) once, and then combine this result with the eigenvalues of H_location (its diagonal entries) to get the eigenvalues and eigenvectors of H_final. This would reduce our computation from tens of thousands of eig() calls to one eig() call plus tens of thousands of very simple operations. I don't remember enough of my linear algebra to prove such a theory.
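For reference, the loop described above can be sketched in numpy (standing in for MATLAB here); the sizes, seed, and iteration count are illustrative placeholders, and numpy's eigh is the symmetric-matrix eigensolver, analogous to calling eig on a symmetric matrix in MATLAB:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6**3  # smallest case mentioned: 6^3 x 6^3

# H_constant: real symmetric, computed once before the loop
A = rng.standard_normal((n, n))
H_constant = (A + A.T) / 2

for step in range(3):  # stands in for tens of thousands of iterations
    # H_location: diagonal, regenerated each iteration
    H_location = np.diag(rng.standard_normal(n))
    H_final = H_constant + H_location
    # eigh exploits symmetry and returns real eigenvalues in
    # ascending order with orthonormal eigenvectors as columns
    eigvals, eigvecs = np.linalg.eigh(H_final)
```

The expensive call is the eigh inside the loop; the question is whether the one-time decomposition of H_constant can replace it.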

I hope I have explained the problem well enough, and that someone is able to help.

Thank you,

Vincent
 
The eigenvalues of a sum of matrices C = A + B equal the sums of the individual eigenvalues, c_n = a_n + b_n, only in very special cases; A and B both diagonal is one such case. In general, your proposed approach is invalid.
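A tiny numerical counterexample makes this concrete, using a symmetric matrix plus a diagonal one, exactly the structure in the original question (numpy sketch; the matrices are chosen purely for illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # symmetric; eigenvalues -1, 1
B = np.diag([1.0, 3.0])      # diagonal;  eigenvalues  1, 3

# Naive guess: add the sorted eigenvalue lists -> [0, 4]
sum_eigs = np.sort(np.linalg.eigvalsh(A) + np.linalg.eigvalsh(B))

# Actual eigenvalues of A + B = [[1, 1], [1, 3]] are 2 +/- sqrt(2)
true_eigs = np.linalg.eigvalsh(A + B)
```

The naive sum [0, 4] disagrees with the true spectrum, roughly [0.586, 3.414], so knowing the spectra of A and B separately does not determine the spectrum of A + B.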
 
Thank you for your response.

I knew it wouldn't be as easy as adding them together. I wonder, though, whether there is a closed-form solution to this, or perhaps some approximations we can make.
 
If A and B have a common eigenvector v, then (A + B)v = Av + Bv = λ_A v + λ_B v = (λ_A + λ_B)v, but that is a very special situation.
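This shared-eigenvector case can be checked numerically. One easy way to manufacture it is to take B as a polynomial in A, which guarantees a common eigenbasis (numpy sketch with illustrative matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric; eigenvalues 1 and 3
B = A @ A                    # B = A^2 shares A's eigenvectors; eigenvalues 1 and 9

a = np.linalg.eigvalsh(A)    # ascending: [1, 3]
b = np.linalg.eigvalsh(B)    # ascending: [1, 9]
c = np.linalg.eigvalsh(A + B)

# With a full common eigenbasis, eigenvalues of A + B are sums of
# matched pairs; here squaring is increasing on A's positive spectrum,
# so the ascending orders line up and c equals a + b elementwise.
```

In the thread's setting this would require H_constant and H_location to share eigenvectors for every generated H_location, which a generic diagonal H_location will not satisfy.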
 
