Eigenvectors under scaling

  1. Sep 6, 2010 #1
    (Here * denotes matrix multiplication.)
    I'm working on a method for visualising graphs, and that method relies on eigenvector computations. For a certain square matrix K (defined as K = C_transpose*C, so K is symmetric) I have to compute the eigenvectors.
    Since C is mXn with m >> n, I use only a portion of C: instead of taking the dot product of the COMPLETE first row of C_transpose with the complete first column of C, I take the dot product of a PORTION of that row (1/5 of it, say) with the corresponding portion of the column. To get a good approximation of the complete C_transpose*C, I'm not sure whether I need to multiply each entry of K by 5 (to compensate for the 1/5 above).
    How do the eigenvectors behave if I do not perform the multiplication by 5?

    In addition, any other suggestions on how to approximate C_transpose*C, where C is an mXn matrix with m >> n, are very welcome.
    I hope I explained the problem properly.
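    To make the setup concrete, here is a minimal sketch of what I mean (assuming NumPy; the sizes, the 1/5 fraction, and the use of the leading rows are just illustrative):
    Code (Python):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 10000, 20          # m >> n
    fraction = 0.2            # keep 1/5 of the m terms in each dot product

    C = rng.standard_normal((m, n))
    K_full = C.T @ C          # the complete n x n product

    k = int(m * fraction)
    K_part = C[:k].T @ C[:k]       # dot products over the leading 1/5 portion only
    K_scaled = K_part / fraction   # i.e. multiply each entry by 5

    # eigh is for symmetric matrices; eigenvalues come back sorted ascending
    w_part, V_part = np.linalg.eigh(K_part)
    w_scaled, V_scaled = np.linalg.eigh(K_scaled)

    # K_part and K_scaled differ by a positive scalar, so the eigenvalues
    # scale by 5 while the eigenvectors agree (up to sign):
    print(np.allclose(w_scaled, w_part / fraction))
    print(np.allclose(np.abs(V_scaled), np.abs(V_part)))

    # How close these are to the eigenvectors of the full product is a
    # separate, sampling-dependent question:
    _, V_full = np.linalg.eigh(K_full)
    print(np.max(np.abs(np.abs(V_full) - np.abs(V_scaled))))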
  4. Sep 7, 2010 #3
    The answer to this: if the entries of a real symmetric matrix are all scaled by a constant, the eigenvalues are scaled by that same constant, but the eigenvectors stay the same.
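    A quick numerical check of that claim (NumPy assumed; K here is just a random symmetric matrix):
    Code (Python):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    K = A + A.T            # a real symmetric matrix
    c = 5.0                # a positive scaling constant (ordering is preserved)

    w, V = np.linalg.eigh(K)
    w_c, V_c = np.linalg.eigh(c * K)

    print(np.allclose(w_c, c * w))                # eigenvalues scale by c
    print(np.allclose(np.abs(V_c), np.abs(V)))    # eigenvectors unchanged (up to sign)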
    However, I would still be very happy to hear your opinions on this:
    Code (Text):

    any other suggestions on how to approximate C_transpose*C, where C is an mXn matrix with m >> n, are very welcome.
  5. Sep 7, 2010 #4
    Let me see if I understand: you want to approximate the dot product of two very long vectors

    [tex]x\cdot y=\sum_{i=1}^nx_iy_i[/tex]

    by a kind of averaged value

    [tex]x\cdot y\approx 5\sum_{i=1}^{n/5}x_iy_i[/tex]
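    As a toy check of how well that works (NumPy assumed; x and y are deliberately correlated here, since for independent random vectors the dot product hovers around zero and relative error is meaningless):
    Code (Python):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100000
    x = rng.standard_normal(n)
    y = x + 0.1 * rng.standard_normal(n)   # correlate x and y so x.y grows with n

    full = x @ y                           # sum over all n terms
    k = n // 5
    approx = 5.0 * (x[:k] @ y[:k])         # 5 * sum over the first n/5 terms

    print(full, approx, abs(full - approx) / abs(full))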

  6. Sep 8, 2010 #5
    Yes, but the goal is for the resulting approximate matrix to have eigenvectors close to those I would obtain with the complete dot products. Note that the factor of 5 is not necessary: it matters for approximating the matrix obtained with complete dot products, but the resulting eigenvectors stay the same without it. The method is just one part of a sophisticated algorithm, which shows acceptable results with this approach.

    However, I would like to hear other opinions, too.
  7. Sep 8, 2010 #6
    Maybe I'm wrong, but it seems to me that the condition m >> n is not necessary for your approach; you only need m to be very large. Why should n be small compared to m?
