
Vector iteration / power method for eigenvalues

  1. May 31, 2012 #1
    I have been reading about the vector iteration/power method for computing eigenvalues and eigenvectors and have some questions. I shall be grateful if someone can help me.

    Now, in the power method we get the dominant eigenvalue and the corresponding eigenvector.

    This is followed by the deflation method, wherein we compute the remaining eigenvalues and eigenvectors.

    To compute the dominant eigenvalue and the corresponding eigenvector we carry out the following steps:

    i. Choose an arbitrary starting vector with at least one nonzero element.
    ii. Multiply the matrix by this vector to compute a new vector.
    iii. Normalize the new vector, taking the normalization scalar as an initial eigenvalue estimate.
    iv. Multiply the original matrix by the normalized vector to calculate a new vector.
    v. Normalize this vector, taking the normalization scalar as a new eigenvalue estimate.
    vi. Repeat the entire process until the absolute relative error between successive eigenvalue estimates falls below an arbitrary tolerance (threshold) value.
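    The steps above can be sketched in Python. This is only a minimal sketch; I use the Rayleigh quotient x·(Ax) as the eigenvalue estimate instead of the raw normalization scalar, which is one common choice and also recovers the sign:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    # Step i: arbitrary starting vector with at least one nonzero element.
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    eig_old = 0.0
    for _ in range(max_iter):
        y = A @ x                     # steps ii/iv: multiply by the matrix
        eig = x @ y                   # current eigenvalue estimate
        x = y / np.linalg.norm(y)     # steps iii/v: normalize
        # Step vi: stop when the relative change in the estimate is small.
        if abs(eig - eig_old) <= tol * abs(eig):
            break
        eig_old = eig
    return eig, x

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
eig, vec = power_method(A)            # eig -> 3, vec -> [1, 0] up to sign
```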

    My question here is:

    1) How can it be said that the eigenvalue is the dominant eigenvalue? I mean, what is the physical interpretation of it being dominant?

    Then in the deflation method we do the following:

    First, we use the power method to find the largest eigenvalue c and corresponding eigenvector X of matrix A.
    a) Multiply the eigenvector by its transpose and scale by the eigenvalue. This produces the matrix [itex]Z^* = cXX^T[/itex].
    b) Compute a new matrix [itex]A^* = A - Z^* = A - cXX^T[/itex].
    c) Apply the power method to A* to compute its largest eigenvalue. This in turn should be the second-largest eigenvalue of the initial matrix A.
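    A rough sketch of steps a)-c), assuming A is symmetric and X is normalized (the matrix and helper name here are just for illustration):

```python
import numpy as np

def power_method(A, iters=500):
    """Plain power iteration; returns (dominant eigenvalue, unit eigenvector)."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ (A @ x), x

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])          # symmetric; eigenvalues 3 +/- sqrt(2)

c, X = power_method(A)              # dominant pair of A
Z = c * np.outer(X, X)              # (a) Z* = c X X^T
A_defl = A - Z                      # (b) A* = A - Z*
c2, _ = power_method(A_defl)        # (c) dominant eigenvalue of A*,
                                    #     i.e. second-largest of A
```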

    Though these steps are mathematically easy to do/program, what exactly are we doing in a), b) and c)? Is it some kind of similarity transformation?

    Please help
  3. Jun 1, 2012 #2
    Can anyone please help? I shall be grateful.

  4. Jun 1, 2012 #3



    Take a simple example. The matrix
    [tex]\begin{bmatrix}3 & 0 \\ 0 & 1\end{bmatrix}[/tex]
    obviously has 3 and 1 as eigenvalues with eigenvectors [1, 0] and [0, 1] respectively.

    The larger eigenvalue is 3, so that is the one the power method would give. Your [itex]cXX^T[/itex] would be
    [tex]3\begin{bmatrix}1 \\ 0 \end{bmatrix}\begin{bmatrix}1 & 0 \end{bmatrix}= \begin{bmatrix}3 & 0 \\ 0 & 0\end{bmatrix}[/tex]

    So [itex]A - 3XX^T[/itex] becomes
    [tex]\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}[/tex]

    Essentially, [itex]cXX^T[/itex], with c an eigenvalue and X a normalized eigenvector corresponding to c, is a matrix that has c as an eigenvalue with multiples of X as its eigenspace, but maps every vector orthogonal to X to 0. Subtracting, [itex]A - cXX^T[/itex] gives a matrix which maps vectors in the eigenspace of c to 0 while acting on the other eigenspaces exactly as A did (this works cleanly when A is symmetric, so that its eigenspaces are mutually orthogonal). In effect the eigenvalue for that eigenspace is now 0, which is the smallest eigenvalue in absolute value rather than the largest.
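    A quick numerical check of the 2x2 example above:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
c = 3.0                               # dominant eigenvalue
X = np.array([1.0, 0.0])              # normalized dominant eigenvector

B = A - c * np.outer(X, X)            # the deflated matrix A - c X X^T
print(B @ X)                          # [0. 0.]  -> eigenvalue 3 replaced by 0
print(B @ np.array([0.0, 1.0]))       # [0. 1.]  -> other eigenpair untouched
```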
  5. Jun 1, 2012 #4
    Thanks a lot for the reply.

    Now, you said:

    1) When you say "eigenspace", does it mean: we have a matrix (say "A"), and if it has an eigenvalue "c", then all the eigenvectors of "A" with eigenvalue "c" constitute an "eigenspace" - right? (By the strict definition, including the zero vector.)

    2) I don't follow when you say:

    Does this mean that all other vectors which are not eigenvectors of A for the eigenvalue c are mapped to 0? That is, all columns of the matrix are zero except the column(s) constituting a vector (or vectors) with eigenvalue c?

    Does this mean that after computing [itex]A - cXX^T[/itex] the resultant matrix has all columns nonzero except those columns which constitute eigenvectors with some eigenvalue other than c?
  6. Jun 2, 2012 #5



    Yes. [itex]A - cXX^T[/itex] maps every eigenvector of A with eigenvalue c to the 0 vector, while mapping an eigenvector v with eigenvalue [itex]a\ne c[/itex] to av (assuming A is symmetric and X is normalized, so that the eigenspaces are orthogonal).
  7. Jun 3, 2012 #6
    Thanks a lot again.

    It is said that the power method fails if there is no dominant eigenvalue. Does this mean that if two eigenvalues are close in absolute value the power iteration will converge slowly, and that if two eigenvalues are equal in absolute value the method will not converge/will fail?

    Why is it so numerically?
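    One way to see the slowdown numerically: the error of power iteration shrinks roughly like |λ2/λ1|^k after k iterations, so the closer the two largest magnitudes are, the more iterations the same tolerance takes. A small check with a hypothetical helper (the matrices and names are just for illustration):

```python
import numpy as np

def iters_to_converge(l1, l2, tol=1e-8, cap=100000):
    """Iterations until successive eigenvalue estimates agree to `tol`."""
    A = np.diag([l1, l2])
    x = np.array([1.0, 1.0])
    x /= np.linalg.norm(x)
    eig_old = 0.0
    for k in range(1, cap + 1):
        y = A @ x
        eig = x @ y                        # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        if abs(eig - eig_old) < tol * abs(eig):
            return k
        eig_old = eig
    return cap

k_far = iters_to_converge(3.0, 1.0)        # |l2/l1| = 0.33: converges fast
k_close = iters_to_converge(3.0, 2.9)      # |l2/l1| = 0.97: far more iterations
```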
  8. Jun 6, 2012 #7

    Can anyone please help?