Optimizing Eigenvector Computations for Large Matrices

  • Context: Graduate
  • Thread starter: onako
  • Tags: Eigenvectors, Scaling

Discussion Overview

The discussion revolves around optimizing eigenvector computations for large matrices, specifically focusing on approximating the product of a matrix with its transpose. The context includes theoretical considerations and potential methods for improving computational efficiency in scenarios where the number of rows significantly exceeds the number of columns.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant describes a method for visualizing graphs that involves computing eigenvectors from a symmetric matrix K, derived from a portion of a larger matrix C, where m (rows) is much greater than n (columns).
  • Another participant asserts that while the entries of the symmetric matrix and its eigenvalues are scaled, the eigenvectors remain unchanged, prompting further inquiry into the implications of this scaling on the approximation method.
  • A participant seeks clarification on the approximation method, suggesting that it involves averaging the dot product of two long vectors to achieve a similar result.
  • The original poster acknowledges that while scaling may not be necessary for the resulting eigenvectors, they are interested in additional opinions on the approximation method.
  • One participant questions the necessity of the condition m >> n, suggesting that only a very large m may be sufficient for the proposed approach.

Areas of Agreement / Disagreement

Participants express differing views on the necessity of the condition m >> n and the implications of scaling on eigenvectors. The discussion remains unresolved, with multiple competing perspectives on the effectiveness and requirements of the approximation method.

Contextual Notes

The discussion includes assumptions about the behavior of eigenvectors under scaling and the conditions under which the approximation method is valid. There are unresolved mathematical considerations regarding the implications of the proposed approach.

onako
(* denotes matrix product)
I'm working on a method of visualising graphs, and that method uses eigenvector computations. For a certain square matrix K (whose entries are the result of C_transpose*C, so K is symmetric) I have to compute the eigenvectors.
Since C is m×n, where m >> n, I work with only a portion of C: NOT the dot product of the COMPLETE first row of C_transpose with the complete first column of C, but the dot product of a PORTION (1/5, perhaps) of that row with the corresponding portion of the column. To get a good approximation of the original COMPLETE C_transpose*C, I'm not sure whether I need to multiply each entry of K by 5 (to compensate for the 1/5 above).
How would the eigenvectors behave if I do not perform the multiplication by 5?

In addition, any other suggestions on how to approximate C_transpose*C, where C is an m×n matrix with m >> n, are very welcome.
I hope I explained the problem properly.
Thanks
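For concreteness, the construction above can be sketched in NumPy; this is a minimal sketch under the assumption that the "portion" means the first m/5 rows of C (all variable names and the random test matrix are illustrative, not from the original method):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1000, 10                      # m >> n
C = rng.standard_normal((m, n))

# Full Gram matrix: K = C_transpose * C, a symmetric n x n matrix
K_full = C.T @ C

# Approximation from the first 1/5 of the rows, rescaled by 5
C_part = C[: m // 5]
K_approx = 5 * (C_part.T @ C_part)

print(K_full.shape, K_approx.shape)  # both are (10, 10)
```

The factor of 5 only rescales the approximate matrix uniformly, so it changes the eigenvalues but not the eigenvectors.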
 
Anyone?
 
The answer for this: if the entries of a real symmetric matrix are scaled by a constant, the eigenvalues are scaled by the same constant, but the eigenvectors stay the same.
However, I would really be happy to hear your opinion on this:

"any other suggestions on how to approximate C_transpose*C, where C is an m×n matrix with m >> n, are very welcome."
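The scaling claim can be checked numerically; here is a small sketch with NumPy (the symmetric matrix is random, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
S = A + A.T                        # a real symmetric matrix

w, V = np.linalg.eigh(S)           # eigenvalues w, eigenvectors in columns of V
w5, V5 = np.linalg.eigh(5 * S)     # the same matrix with every entry scaled by 5

# The eigenvalues are scaled by the same factor...
print(np.allclose(w5, 5 * w))
# ...but the eigenvectors are unchanged (compared up to sign, since an
# eigenvector is only defined up to a scalar factor)
print(np.allclose(np.abs(V5), np.abs(V)))
```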
 
Let me understand: you want to approximate the dot product of two very long vectors

$$x\cdot y=\sum_{i=1}^{n}x_iy_i$$

by a kind of averaged value

$$x\cdot y\approx 5\sum_{i=1}^{n/5}x_iy_i$$

right?
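As a sketch of what such a rescaled partial sum looks like in practice (the correlated synthetic vectors here are an assumption for illustration, since the accuracy depends on the kept terms being representative of the whole):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.standard_normal(n)
y = x + 0.1 * rng.standard_normal(n)  # correlated with x, so x.y is far from 0

exact = np.dot(x, y)                             # full sum over n terms
approx = 5.0 * np.dot(x[: n // 5], y[: n // 5])  # first n/5 terms, rescaled by 5

print(f"exact={exact:.1f} approx={approx:.1f} "
      f"relative error={abs(approx - exact) / abs(exact):.3%}")
```

For vectors like these the rescaled partial sum is an unbiased estimate and the relative error shrinks as more terms are kept; for vectors whose entrywise products fluctuate around zero, the estimate can be much noisier.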
 
Yes, but the goal is for the resulting approximate output matrix to have eigenvectors close to those I would obtain with the complete dot products. Note that multiplying by 5 is not necessary (it is needed to approximate the matrix obtained with the complete dot products, but the resulting eigenvectors stay the same either way). The method is just one part of a sophisticated algorithm, which shows acceptable results with this approach.

However, I would like to hear other opinions, too.
 
Maybe I'm wrong, but it seems to me that the condition m >> n is not necessary for your approach, only that m is very large. Why should n be small compared to m?
 
