Vector iteration/power method for eigenvalues

svishal03
I have been reading about the vector iteration/power method for computing eigenvalues and eigenvectors and have some questions. I shall be grateful if someone can help me.

In the power method we get the dominant eigenvalue and the corresponding eigenvector.

This is followed by the deflation method, wherein we compute the remaining eigenvalues and eigenvectors.

To compute the dominant eigenvalue and corresponding eigenvector we carry out the following steps:

i. Start from an arbitrary candidate vector with at least one nonzero element.
ii. Multiply the matrix by this vector to compute a new vector.
iii. Normalize the new vector, where the normalization scalar is taken as an initial eigenvalue estimate.
iv. Multiply the original matrix by the normalized vector to calculate a new vector.
v. Normalize this vector, where the normalization scalar is taken as the new eigenvalue estimate.
vi. Repeat the entire process until the absolute relative error between successive eigenvalue estimates satisfies an arbitrary tolerance (threshold) value.
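The steps above can be sketched in NumPy (a minimal sketch; the infinity-norm-style normalization, where the scalar is the largest-magnitude component, is one common choice):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Estimate the dominant eigenvalue and eigenvector of A by power iteration."""
    x = np.ones(A.shape[0])            # i. arbitrary nonzero starting vector
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x                      # ii./iv. multiply by the matrix
        lam = y[np.argmax(np.abs(y))]  # iii./v. normalization scalar = eigenvalue estimate
        x = y / lam                    # normalize
        if abs((lam - lam_old) / lam) < tol:
            break                      # vi. stop when successive estimates agree
        lam_old = lam
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # eigenvalues (5 ± sqrt(5)) / 2
lam, x = power_method(A)                 # lam converges to the larger one
```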


My question here is:

1) How can it be said that this eigenvalue is the dominant eigenvalue? I mean, what is the physical interpretation of it being dominant?


Then in the deflation method we do the following:

First, we use the power method to find the largest eigenvalue and eigenvector of matrix A.
a) Multiply the largest (normalized) eigenvector by its transpose and then by the largest eigenvalue. This produces the matrix Z* = c·X·X^T.
b) Compute a new matrix A* = A - Z* = A - c·X·X^T.
c) Apply the power method to A* to compute its largest eigenvalue. This in turn should be the second largest eigenvalue of the initial matrix A.
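Steps a)-c) can be sketched as follows (a sketch assuming A is symmetric and the eigenvector is normalized to unit length; the starting vector is chosen so it is not orthogonal to the answer):

```python
import numpy as np

def power_method(A, iters=200):
    """Power iteration returning the dominant eigenvalue and a unit eigenvector."""
    x = np.arange(1.0, A.shape[0] + 1)    # generic start, not orthogonal to the answer
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x @ A @ x, x                   # Rayleigh quotient as the eigenvalue estimate

A = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric, eigenvalues 3 and 1
lam1, x1 = power_method(A)                # largest eigenpair of A
Z = lam1 * np.outer(x1, x1)               # a) Z* = c X X^T
A_star = A - Z                            # b) deflated matrix A* = A - Z*
lam2, x2 = power_method(A_star)           # c) largest eigenvalue of A*,
                                          #    i.e. second largest of A
```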

Though these steps are mathematically easy to do/program, what exactly are we doing in a), b) and c)? Is it some kind of similarity transformation?

Please help
 
Can anyone please help? I shall be grateful.

vishal
 
Take a simple example. The matrix
$$\begin{bmatrix}3 & 0 \\ 0 & 1\end{bmatrix}$$
obviously has 3 and 1 as eigenvalues, with eigenvectors [1, 0] and [0, 1] respectively.

The larger eigenvalue is 3, so that is the one the power method would give. Your $cXX^T$ would be
$$3\begin{bmatrix}1 \\ 0 \end{bmatrix}\begin{bmatrix}1 & 0 \end{bmatrix} = \begin{bmatrix}3 & 0 \\ 0 & 0\end{bmatrix}$$

So $A - 3XX^T$ becomes
$$\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}$$

Essentially, $cXX^T$, with c an eigenvalue and X a unit eigenvector corresponding to c, is a matrix that has c as an eigenvalue with multiples of X as its eigenspace, but maps everything orthogonal to X to 0 (for a symmetric A, the other eigenspaces are exactly that orthogonal complement). Subtracting, $A - cXX^T$ gives a matrix which maps vectors in the eigenspace of c to 0 while mapping the other eigenspaces to the same thing A did. In effect the eigenvalue for that eigenspace is now 0, which is obviously the smallest eigenvalue (in absolute value) rather than the largest.
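The 2x2 example above can be checked numerically (a minimal sketch in NumPy):

```python
import numpy as np

A = np.array([[3.0, 0.0], [0.0, 1.0]])   # eigenvalues 3 and 1
c = 3.0
X = np.array([[1.0], [0.0]])             # unit eigenvector for eigenvalue 3, as a column
Z = c * (X @ X.T)                        # c X X^T -> [[3, 0], [0, 0]]
A_def = A - Z                            # deflated matrix -> [[0, 0], [0, 1]]
```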
 
Thanks a lot for the reply.

Now, you said:

Essentially, $cXX^T$, with c an eigenvalue and X an eigenvector corresponding to c, is a matrix that has c as eigenvalue with multiples of X as eigenspace

1) When you say "eigenspace", does it mean: we have a matrix (say "A"), and if it has an eigenvalue "c", then all the possible eigenvectors of "A" which have eigenvalue "c" constitute an "eigenspace". Right? (By the strict definition, including the zero vector.)

2) I don't follow when you say:

but maps all other subspaces to 0.

Does this mean all other vectors which are not eigenvectors of A with eigenvalue c are mapped to 0? That is, all columns of the matrix are zero except the column(s) corresponding to eigenvector(s) with eigenvalue c?


Subtracting, $A - cXX^T$ gives a matrix which maps vectors in the eigenspace of c to 0 while mapping other subspaces to the same thing A did.

Does this mean that after doing $A - cXX^T$ the resultant matrix has all columns nonzero except those columns which constitute eigenvectors with some eigenvalue other than c?
 
Yes. $A - cXX^T$ maps all eigenvectors of A with eigenvalue c to the 0 vector, while mapping an eigenvector v with eigenvalue $a \ne c$ to $av$.
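This mapping behaviour can be verified directly on a small symmetric example (a sketch; the matrix and eigenpairs are chosen for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])            # symmetric; eigenvalues 3 and 1
c = 3.0
X = np.array([1.0, 1.0]) / np.sqrt(2)             # unit eigenvector for c = 3
v, a = np.array([1.0, -1.0]) / np.sqrt(2), 1.0    # the other eigenpair
B = A - c * np.outer(X, X)                        # deflated matrix
# B @ X is (numerically) the zero vector, while B @ v still equals a * v
```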
 
Thanks a lot again.

It is said that the power method fails if there is no dominant eigenvalue. Does this mean that if two eigenvalues are close in absolute value the power iteration will converge slowly, and that if two eigenvalues have equal absolute value the method will not converge/will fail?

Why is it so numerically?
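A quick numerical illustration of the slow-convergence case (a sketch assuming a symmetric test matrix and a Rayleigh-quotient eigenvalue estimate; the error per step shrinks roughly like the ratio of the two largest eigenvalue magnitudes):

```python
import numpy as np

def rayleigh_estimates(A, k):
    """Run k power-iteration steps, returning the Rayleigh-quotient estimate each step."""
    x = np.array([1.0, 1.0])
    out = []
    for _ in range(k):
        y = A @ x
        x = y / np.linalg.norm(y)
        out.append(x @ A @ x)
    return out

well_separated = np.diag([3.0, 1.0])   # |lambda2/lambda1| = 1/3: error decays fast
nearly_equal   = np.diag([3.0, 2.9])   # ratio ~0.97: error decays very slowly
err_fast = abs(rayleigh_estimates(well_separated, 10)[-1] - 3.0)
err_slow = abs(rayleigh_estimates(nearly_equal, 10)[-1] - 3.0)
# after the same 10 steps, err_fast is many orders of magnitude smaller than err_slow
```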
 
Hi,

Please can anyone help?
 