Can Eigenvalues of Matrix Addition Be Simplified?

vkillion
Hello,

I have a linear algebra problem that I need help with.

Basically, I need the eigenvalues and eigenvectors of a large number of very large matrices (sometimes tens of thousands of them, each 6^n x 6^n with n >= 3). Currently we just use MATLAB's eig() function on each one. I am trying to find optimizations for the simulations to cut down on computing time. There are three matrices that we use.

H_constant - generated before the loop. Real and symmetric about the diagonal. Does not change after initial calculation.

H_location - generated during each iteration. Diagonal.

H_final - H_constant + H_location. Therefore, it is also real and symmetric about the diagonal.

It is H_final that we need the eigenvalues and eigenvectors of. My idea is to calculate the eigenvalues and eigenvectors of H_constant (which won't change after the initial calculation) once, and then combine that result with the eigenvalues of H_location (its diagonal entries) to get the eigenvalues and eigenvectors of H_final. This would reduce our computation from tens of thousands of eig() calls to a single eig() call plus tens of thousands of very simple operations. I don't remember enough of my linear algebra to prove whether this works.
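For concreteness, here is a minimal sketch of the loop structure described above (the dimension, the number of iterations, and the random H_location are placeholders, not the actual simulation):

% Sketch of the current approach: one full eig() call per iteration.
n = 3;
N = 6^n;                                      % matrices are 6^n x 6^n
H_constant = randn(N);
H_constant = (H_constant + H_constant')/2;    % real symmetric, computed once
num_iter = 10;                                % tens of thousands in the real runs
for k = 1:num_iter
    H_location = diag(randn(N,1));            % diagonal, regenerated each iteration
    H_final = H_constant + H_location;        % also real symmetric
    [V, D] = eig(H_final);                    % eigenvectors V, eigenvalues diag(D)
    % ... use V and D ...
end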

I hope I have explained the problem well enough and that someone is able to help.

Thank you,

Vincent
 
The eigenvalues of a sum of matrices ##C = A + B## equal the sums of the individual eigenvalues, that is, ##c_n = a_n + b_n##, only in the most special of cases. ##A## and ##B## both diagonal is one such case. In general your proposed approach is invalid.
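A quick numerical check of this (the specific small matrices are just an illustration):

% Eigenvalues of a sum are generally NOT the sums of the eigenvalues.
A = [2 1; 1 3];                     % real symmetric
B = diag([5 7]);                    % diagonal
sort(eig(A + B))                    % approx [ 6.6972; 10.3028]
sort(eig(A)) + sort(eig(B))         % approx [ 6.3820; 10.6180]  -- not the same

% When BOTH matrices are diagonal, the eigenvalues do simply add:
A2 = diag([2 3]);
sort(eig(A2 + B))                   % [ 7; 10]
sort(eig(A2)) + sort(eig(B))        % [ 7; 10]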
 
Thank you for your response.

I knew it wouldn't be as easy as just adding them together. I wonder, though, whether there is a closed-form solution for this, or at least some approximation we could make.
 
If ##A## and ##B## have the same eigenvector ##v##, then ##(A + B)v = Av + Bv = \lambda_A v + \lambda_B v = (\lambda_A + \lambda_B)v##, but that is a very special situation.
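A minimal sketch of that special case, assuming ##A## and ##B## are constructed to share a full orthonormal set of eigenvectors (the particular V, a, b below are placeholders):

% If A and B share eigenvectors (the columns of an orthogonal V),
% the eigenvalues of A + B are simply the sums lambda_A + lambda_B.
[V, ~] = qr(randn(4));              % random orthogonal matrix: shared eigenvectors
a = [1 2 3 4]';  b = [10 20 30 40]';
A = V*diag(a)*V';                   % A*V(:,i) = a(i)*V(:,i)
B = V*diag(b)*V';                   % B*V(:,i) = b(i)*V(:,i)
sort(eig(A + B))                    % matches sort(a + b) up to round-off
sort(a + b)                         % [11; 22; 33; 44]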
 