Optimizing Eigenvector Computations for Large Matrices

onako
* denotes matrix product.
I'm working on a method for visualising graphs, and that method relies on eigenvector computations. For a certain square matrix K (whose entries are the result of C_transpose*C, so K is symmetric) I have to compute the eigenvectors.
Since C is mxn with m>>n, I work with only a portion of C: instead of the dot product of the COMPLETE first row of C_transpose with the complete first column of C, I take the dot product of a PORTION (1/5, say) of the first row of C_transpose with the corresponding portion of the first column of C. To get a good approximation of the complete C_transpose*C, I'm not sure whether I need to multiply each entry of K by 5 (see the 1/5 above).
How would the eigenvectors behave if I do not perform the multiplication by 5?

In addition, any other suggestions on how to approximate C_transpose*C, where C is an mxn matrix with m>>n, are very welcome.
I hope I explained the problem properly.
Thanks
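For concreteness, here is a minimal NumPy sketch of the subsampling scheme described above, assuming rows of C are sampled uniformly at random; the matrix C, the variable names, and the fraction 1/5 are placeholders, not anything from the thread:
Code:
import numpy as np

rng = np.random.default_rng(0)

m, n = 50_000, 20                  # m >> n, as in the question
C = rng.standard_normal((m, n))    # stand-in data matrix

# Sample a random fraction (here 1/5) of the rows of C.
frac = 0.2
idx = rng.choice(m, size=int(frac * m), replace=False)
C_sub = C[idx]

# Gram matrix of the sampled rows; multiplying by 1/frac (= 5)
# rescales it into an unbiased estimate of the full C^T C.
K_approx = (1.0 / frac) * (C_sub.T @ C_sub)
K_full = C.T @ C                   # exact Gram matrix, for comparison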
 
Anyone?
 
The answer to this: if all entries of a real symmetric matrix are scaled by a constant, the eigenvalues are scaled by the same constant, but the eigenvectors stay the same.
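A quick numerical check of this claim with NumPy (a throwaway random symmetric matrix; note that np.linalg.eigh returns eigenvalues in ascending order, and that eigenvectors are only determined up to sign):
Code:
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
K = A + A.T                      # real symmetric matrix

w, V = np.linalg.eigh(K)         # eigendecomposition of K
w5, V5 = np.linalg.eigh(5 * K)   # same matrix with every entry scaled by 5

print(np.allclose(w5, 5 * w))                # True: eigenvalues scale by 5
print(np.allclose(np.abs(V5), np.abs(V)))    # True: eigenvectors match up to sign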
However, I would really be happy to consider your opinion on this:
"any other suggestions on how to approximate C_transpose*C, where C is an mxn matrix with m>>n, are very welcome."
 
Let me see if I understand: you want to approximate the dot product of two very long vectors (the columns of C, which have length m)

$$x\cdot y=\sum_{i=1}^{m}x_iy_i$$

by a kind of averaged value

$$x\cdot y\approx 5\sum_{i=1}^{m/5}x_iy_i$$

right?
 
Yes, but the goal is for the resulting approximate matrix to have eigenvectors close to those I would obtain with the complete dot products. Note that the factor of 5 is not necessary (it is needed to approximate the matrix obtained with complete dot products, but the resulting eigenvectors stay the same). The method is just one part of a larger algorithm, which shows acceptable results with this approach.

However, I would like to hear other opinions, too.
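One way to make "close eigenvectors" concrete is to compare corresponding top eigenvectors of the full and subsampled Gram matrices via absolute cosine similarity. A sketch, assuming K_full and K_approx as in the earlier snippet (the function name and the choice of top-k are illustrative):
Code:
import numpy as np

def top_eigvec_similarity(K_full, K_approx, k=3):
    # Absolute cosine similarity of the top-k eigenvectors of two
    # symmetric matrices; 1.0 means identical up to sign.
    _, V = np.linalg.eigh(K_full)
    _, W = np.linalg.eigh(K_approx)
    # eigh sorts eigenvalues in ascending order, so the last k
    # columns hold the top-k eigenvectors.
    return [float(abs(V[:, -j] @ W[:, -j])) for j in range(1, k + 1)]
For eigenvectors whose eigenvalues are well separated, these similarities should be close to 1; nearly degenerate eigenvalues make individual eigenvectors unstable, in which case comparing subspaces (principal angles) is more robust.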
 
Maybe I'm wrong, but it seems to me that the condition m >> n is not necessary for your approach; you only need m to be very large. Why should n be small compared to m?
 