Can Eigenvalues of Matrix Addition Be Simplified?

In summary: usually A and B have different eigenvectors, so for an eigenvector v of A, (A+B)v is generally not a scalar multiple of v, and the proposed shortcut is invalid.
  • #1
vkillion
Hello,

I have a linear algebra problem that I need help with.

Basically, I need to get the eigenvalues and eigenvectors of several (sometimes tens of thousands) very large matrices (6^n x 6^n, where n>= 3, to be specific). Currently, we are just using MATLAB's eig() function to get them. I am trying to find optimizations for the simulations to cut down on computing time. There are three matrices that we use.

H_constant - generated before the loop. Real and symmetric about the diagonal. Does not change after initial calculation.

H_location - generated during each iteration. Diagonal.

H_final - H_constant + H_location. Therefore, it is also real and symmetric about the diagonal.

It is H_final that we need the eigenvalues and eigenvectors of. My theory is that we calculate the eigenvalues and eigenvectors of H_constant (which won't change after the initial calculation) once, then combine that result with the eigenvalues of H_location (its diagonal entries) to get the eigenvalues and eigenvectors of H_final. This would reduce our computation from tens of thousands of eig() calls to one eig() call plus tens of thousands of very simple operations. I don't remember enough of my linear algebra to prove such a theory.
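For concreteness, here is a minimal sketch of what we currently do; the matrix names match the description above, while the size, the number of iterations, and the way H_location is generated are just placeholders:

[code]
% Sketch of the current approach (placeholder values throughout).
n = 3;
N = 6^n;
num_iterations = 10000;                 % placeholder; often tens of thousands

% Real symmetric matrix, computed once before the loop.
H_constant = randn(N);
H_constant = (H_constant + H_constant') / 2;

for k = 1:num_iterations
    H_location = diag(randn(N, 1));     % diagonal, regenerated each iteration
    H_final = H_constant + H_location;  % also real and symmetric
    [V, D] = eig(H_final);              % the expensive call we want to avoid
    % ... use V (eigenvectors) and D (eigenvalues of H_final) ...
end
[/code]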

I hope I was able to explain the problem well enough. I hope someone is able to help me with this problem.

Thank you,

Vincent
 
  • #2
The eigenvalues of a sum of matrices C=A+B equal the sum of their eigenvalues, that is, c_n = a_n+b_n, only in the most special of cases. A and B diagonal is one such case. In general your proposed approach is invalid.
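For example, a quick numerical check in MATLAB (a made-up 2x2 example, not from the original post) shows the failure:

[code]
% Counterexample: eigenvalues of A+B are generally not sums of eigenvalues.
A = [2 1; 1 2];      % symmetric, eigenvalues 1 and 3
B = [5 0; 0 -1];     % diagonal, eigenvalues -1 and 5

eig(A)               % 1, 3
eig(B)               % -1, 5
eig(A + B)           % approx 0.8377 and 7.1623 -- no pairing of
                     % {1, 3} with {-1, 5} sums to these values
[/code]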
 
  • #3
Thank you for your response.

I knew it wouldn't be as easy as adding them together. I wonder, though, whether there is a closed-form solution to this; maybe there are some approximations we can make.
 
  • #4
If A and B have the same eigenvector v, then [itex](A+B)v = Av + Bv = \lambda_A v + \lambda_B v = (\lambda_A + \lambda_B)v[/itex],
but that is a very special situation.
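For instance (an illustrative example, not from the thread), if B is a polynomial in A, the two matrices share a full set of eigenvectors and the eigenvalues of the sum are just the sums of the corresponding eigenvalues:

[code]
% Special case: A and B = 2*A + 3*I share eigenvectors, so eigenvalues add.
A = [2 1; 1 2];          % eigenvalues 1 and 3
B = 2*A + 3*eye(2);      % eigenvalues 2*1+3 = 5 and 2*3+3 = 9

eig(A)                   % 1, 3
eig(B)                   % 5, 9
eig(A + B)               % 6, 12  (= 1+5 and 3+9)
[/code]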
 
  • #5
Hello Vincent,

Thank you for reaching out with your linear algebra problem. I understand that you need to find the eigenvalues and eigenvectors of very large matrices, and you are currently using MATLAB's eig() function to do so. You are looking for optimizations to reduce computing time, and you have a theory about using the eigenvalues of H_constant and H_location to get the eigenvalues of H_final.

Unfortunately, your theory does not hold in general, and the reason follows from some basic linear algebra. First, let's recall what eigenvalues and eigenvectors are. An eigenvector of a matrix M is a nonzero vector v such that Mv = λv; the scalar λ is the corresponding eigenvalue. In other words, v is a direction that M only stretches or compresses, and λ is the scaling factor in that direction.

Now, let's look at your matrices. H_constant is real and symmetric, so it has real eigenvalues and an orthonormal set of eigenvectors. H_location is diagonal, so its eigenvalues are its diagonal entries and its eigenvectors are the standard basis vectors. The catch is that H_final = H_constant + H_location inherits the eigenvectors of H_constant, with eigenvalues equal to the sums, only when the two matrices share a common set of eigenvectors (equivalently, only when they commute). A non-diagonal symmetric matrix generally does not commute with a diagonal matrix unless that diagonal matrix is a multiple of the identity, so in general the eigenvectors of H_final differ from those of H_constant and its eigenvalues are not simple sums.

So the proposed shortcut of combining the eigendecomposition of H_constant with the diagonal of H_location will not give the eigenvalues and eigenvectors of H_final, and you would still need a full eigendecomposition for each H_final. One way to see this concretely is to express H_final in the eigenbasis of H_constant, as in the sketch below.
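Here is a small self-contained check (an illustration with a placeholder size, not your actual matrices): in the eigenbasis of H_constant, the diagonal matrix H_location generally becomes non-diagonal, so the eigenvectors of H_constant do not diagonalize H_final, and the shortcut eigenvalues disagree with eig():

[code]
% Illustration (placeholder size N): why the shortcut fails in general.
N = 6;
H_constant = randn(N); H_constant = (H_constant + H_constant') / 2;  % real symmetric
H_location = diag(randn(N, 1));                                      % diagonal
H_final    = H_constant + H_location;

[V, D] = eig(H_constant);        % eigenvectors/eigenvalues of the fixed part

% In the eigenbasis of H_constant, H_final is D plus a generally
% non-diagonal matrix, so V does not diagonalize H_final.
R = V' * H_location * V;
off_diagonal_norm = norm(R - diag(diag(R)))   % almost surely nonzero

% Direct comparison: one natural pairing of the shortcut vs. the truth
% (any pairing fails in general).
shortcut = sort(diag(D) + diag(H_location));
truth    = sort(eig(H_final));
max_error = max(abs(shortcut - truth))        % not small in general
[/code]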

I hope this helps and good luck with your simulations!

Best,
 

FAQ: Can Eigenvalues of Matrix Addition Be Simplified?

1. What are eigenvalues of matrix addition?

The eigenvalues of a sum of matrices A + B are the scalars λ for which (A + B)v = λv for some nonzero vector v, i.e., the ordinary eigenvalues of the matrix A + B. In general they are not simply sums of the eigenvalues of A and of B, which is what makes the question in this thread nontrivial. They represent scaling factors along particular directions and are important in various mathematical and scientific fields.

2. How are eigenvalues of matrix addition calculated?

The eigenvalues of a sum C = A + B are calculated like those of any matrix: they are the roots of the characteristic polynomial det(C − λI) = 0, which is obtained by subtracting λ from each diagonal entry of C and taking the determinant. In practice, numerical routines such as MATLAB's eig() are used rather than forming the polynomial explicitly.
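As a small illustration (forming the characteristic polynomial explicitly is numerically poor and not recommended for large matrices), MATLAB's poly() and roots() recover the same values as eig() on a 2x2 example:

[code]
% Characteristic-polynomial route vs. eig() on a small example.
C = [7 1; 1 1];                 % C = A + B from the earlier counterexample
p = poly(C);                    % coefficients of det(C - lambda*I): [1 -8 6]
roots(p)                        % approx 7.1623 and 0.8377
eig(C)                          % same values (possibly in a different order)
[/code]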

3. What is the significance of eigenvalues of matrix addition?

Eigenvalues play a crucial role in understanding the behavior of linear systems, such as those arising in physics and engineering. They also have important applications in data analysis and machine learning, where they are used to reduce the dimensionality of a dataset.

4. Can a matrix have multiple eigenvalues of matrix addition?

Yes. An n x n matrix has n eigenvalues counted with multiplicity, and these can be repeated. For example, the 2x2 identity matrix has the eigenvalue 1 with multiplicity two.

5. What is the relationship between eigenvalues of matrix addition and eigenvectors?

Eigenvalues and eigenvectors come in pairs: an eigenvector v of a matrix M is a nonzero vector satisfying Mv = λv, where λ is the corresponding eigenvalue. The eigenvector gives a direction that M leaves unchanged except for scaling, and the eigenvalue gives the scaling factor in that direction.
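A quick numerical check of this pairing (an illustrative example):

[code]
% Each column of V pairs with the corresponding diagonal entry of D: M*v = lambda*v.
M = [2 1; 1 2];
[V, D] = eig(M);
residual = norm(M*V - V*D)      % essentially zero (round-off only)
[/code]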
