Are Eigenvalues and Eigenvectors Correctly Understood in Matrix Operations?

saltine

Homework Statement



I am studying eigenvalues and norms. I was wondering whether the way I understand them is correct.

Homework Equations




The Attempt at a Solution



The eigenvalues of a matrix are the scalars \lambda that satisfy Ax = \lambda x, where A is a matrix and x is an eigenvector. The significance here is that, from the perspective of the eigenvector, multiplication by the matrix A is the same as scalar multiplication by \lambda. When A is square, non-singular, and of rank n, A has n eigenvalues and n eigenvectors. The n eigenvectors form a basis.

Suppose v_i is an eigenvector with associated eigenvalue \lambda_i. Suppose an arbitrary vector y can be represented as:

y = a v_1 + bv_2

Then

Ay = a\lambda_1v_1 + b\lambda_2v_2

The magnitude (2-norm) of vector y in terms of the eigenvector basis is \sqrt{a^2+b^2}. The magnitude of Ay is \sqrt{(a\lambda_1)^2 + (b\lambda_2)^2}. The gain in magnitude is the ratio between the two, which could be a mess to compute. If we check the gain in magnitude over all possible y, we get the 2-norm of matrix A.
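As a quick sanity check of this decomposition, here is a minimal numerical sketch (the matrix and the coefficients a, b are made up for illustration, and numpy is assumed):

[CODE=python]
import numpy as np

# Made-up symmetric (hence diagonalizable) matrix with an orthonormal eigenbasis.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lams, V = np.linalg.eigh(A)        # eigh: symmetric case, orthonormal eigenvectors
l1, l2 = lams
v1, v2 = V[:, 0], V[:, 1]

a, b = 0.7, -1.3                   # arbitrary coefficients
y = a * v1 + b * v2

# A y should equal a*l1*v1 + b*l2*v2.
print(np.allclose(A @ y, a * l1 * v1 + b * l2 * v2))   # True

# ||y||_2 == sqrt(a^2 + b^2) because v1, v2 are orthonormal here;
# for a non-orthonormal eigenbasis this equality fails.
print(np.isclose(np.linalg.norm(y), np.hypot(a, b)))   # True
[/CODE]

Note that \sqrt{a^2+b^2} equals the 2-norm of y only when the eigenvectors are orthonormal, which is why a symmetric matrix is used in the sketch.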

Since the 2-norm could be messy to compute, say we look at the \infty-norm instead. With the infinity norm, the magnitude of y is the maximum of a and b, and the magnitude of Ay is the maximum of a\lambda_1 and b\lambda_2. The gain in magnitude is again the ratio. For each y, the gain would be one of four possibilities: \lambda_1, \lambda_2, \lambda_1 a/b, or \lambda_2 b/a. In the last two cases, the fraction a/b or b/a must be less than 1, because if a > b in the first of these two cases, then the norm of y would be a, and the gain would have been \lambda_1 instead. Therefore, when all possible vectors y are considered, the gain of matrix A must be the maximum of \lambda_1 and \lambda_2.
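A minimal sketch of that \infty-norm claim, working in the coordinates of the eigenvector basis (so A acts as the diagonal matrix diag(\lambda_1, \lambda_2)); the eigenvalues below are made-up positive numbers:

[CODE=python]
import numpy as np

# Work in the coordinates of the eigenvector basis, where A acts as
# diag(l1, l2).  The eigenvalues below are made up (and positive).
l1, l2 = 3.0, 0.5
D = np.diag([l1, l2])

rng = np.random.default_rng(0)
gains = []
for _ in range(10_000):
    y = rng.uniform(-1.0, 1.0, size=2)
    gains.append(np.linalg.norm(D @ y, np.inf) / np.linalg.norm(y, np.inf))

# The largest observed gain is max(|l1|, |l2|), the induced infinity-norm
# of a diagonal matrix (absolute values matter for negative eigenvalues).
print(max(gains), max(abs(l1), abs(l2)))
[/CODE]

With absolute values, max(|\lambda_1|, |\lambda_2|) is exactly the induced \infty-norm of a diagonal matrix.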


Topic 2: the eigenvalues of a sum of matrices:

For a matrix A, there is a Jordan normal form J, an upper triangular matrix with the eigenvalues of A on its diagonal. A and its Jordan form J are related by an invertible matrix P in this fashion: AP = PJ. Since det(AP) = det(A)det(P) and det(PJ) = det(P)det(J), it follows that det(A) = det(J) = \prod \lambda_i.
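A quick numerical check of det(A) = \prod \lambda_i (the example matrix is made up; the identity holds for any square matrix, diagonalizable or not):

[CODE=python]
import numpy as np

# Arbitrary made-up square matrix.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigs = np.linalg.eigvals(A)
# For a real matrix, complex eigenvalues come in conjugate pairs, so the
# product has a negligible imaginary part.
print(np.prod(eigs).real, np.linalg.det(A))   # the two values agree numerically
[/CODE]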

Suppose we have matrices A and B. A has Jordan form J such that AP = PJ. B has Jordan form K such that BQ = QK. Then the determinant of A+B is:

det(A + B) = det( PJP^{-1} + QKQ^{-1} )

If P happens to equal Q, then:

det(A + B) = det( P( J+K )P^{-1} ) = \prod (\lambda_j + \lambda_k)

One situation where P can equal Q is when A is the identity matrix. So the matrix M := I+B would have eigenvalues \lambda_m = 1+\lambda_b.

Is this explanation correct?

- Thanks
 


saltine said:

Homework Statement



I am studying eigenvalues and norms. I was wondering whether the way I understand them is correct.

Homework Equations




The Attempt at a Solution



The eigenvalues of a matrix are the scalars \lambda that satisfy Ax = \lambda x, where A is a matrix and x is an eigenvector. The significance here is that, from the perspective of the eigenvector, multiplication by the matrix A is the same as scalar multiplication by \lambda. When A is square, non-singular, and of rank n, A has n eigenvalues and n eigenvectors. The n eigenvectors form a basis.
No. A matrix must of course be square in order for Ax = \lambda x to make sense (more generally, for Av to be equal to a multiple of v, A must be a linear transformation from a given vector space to itself), but it is not necessary that the matrix be non-singular. If a matrix is singular, that just means it has at least one eigenvalue equal to 0. Finally, a square, non-singular matrix (and if it is non-singular it is necessarily of rank n) does not in general have n independent eigenvectors, so they do not necessarily form a basis. That is true if and only if the matrix is "diagonalizable", which is guaranteed, for example, whenever the matrix is real and symmetric (though symmetry is not necessary).
For example, the matrix
\left[\begin{array}{cc}1 & 1 \\ 0 & 1\end{array}\right]
which is not symmetric, has the single eigenvalue 1 and all eigenvectors are multiples of (1, 0).
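A quick numerical illustration of this example (a sketch assuming numpy; note that numpy still returns two eigenvector columns, but they are numerically parallel):

[CODE=python]
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

lams, V = np.linalg.eig(A)
print(lams)   # [1. 1.]  -- the eigenvalue 1 with algebraic multiplicity 2
print(V)      # both columns are numerically multiples of (1, 0): there is no
              # second independent eigenvector, so no eigenvector basis exists
[/CODE]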

Suppose v_i is an eigenvector with associated eigenvalue \lambda_i. Suppose an arbitrary vector y can be represented as:

y = a v_1 + bv_2

Then

Ay = a\lambda_1v_1 + b\lambda_2v_2

The magnitude (2-norm) of vector y in terms of the eigenvector basis is \sqrt{a^2+b^2}. The magnitude of Ay is \sqrt{(a\lambda_1)^2 + (b\lambda_2)^2}. The gain in magnitude is the ratio between the two, which could be a mess to compute. If we check the gain in magnitude over all possible y, we get the 2-norm of matrix A.
By "gain in magnitude" I take it you mean
\frac{||Ay||}{||y||}
and you "get the 2 norm of matrix A" as the supremum of that. It is sufficient to look at vectors with norm 1.

Since the 2-norm could be messy to compute, say we look at the \infty-norm instead. With the infinity norm, the magnitude of y is the maximum of a and b, and the magnitude of Ay is the maximum of a\lambda_1 and b\lambda_2. The gain in magnitude is again the ratio. For each y, the gain would be one of four possibilities: \lambda_1, \lambda_2, \lambda_1 a/b, or \lambda_2 b/a. In the last two cases, the fraction a/b or b/a must be less than 1, because if a > b in the first of these two cases, then the norm of y would be a, and the gain would have been \lambda_1 instead. Therefore, when all possible vectors y are considered, the gain of matrix A must be the maximum of \lambda_1 and \lambda_2.


Topic 2: the eigenvalues of a sum of matrices:

For a matrix A, there is a Jordan normal form J, an upper triangular matrix with the eigenvalues of A on its diagonal. A and its Jordan form J are related by an invertible matrix P in this fashion: AP = PJ. Since det(AP) = det(A)det(P) and det(PJ) = det(P)det(J), it follows that det(A) = det(J) = \prod \lambda_i.
It's a bit more than that. A Jordan form has non-zero entries only on the main diagonal and the diagonal just above it. Each "Jordan block" has a single eigenvalue of the matrix repeated along its main diagonal and 1's on the diagonal just above it; the superdiagonal entries separating different blocks are 0.

By the way, if it were true, as you said above, that the eigenvectors of every matrix formed a basis, we wouldn't need the Jordan form at all! Taking P as the matrix with the eigenvectors of A as columns, we would have AP = PD, where D is the diagonal matrix with the eigenvalues of A on the main diagonal.
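A small sketch of that contrast (assuming sympy; the matrices are made up): a symmetric matrix diagonalizes with its eigenvector matrix, while the defective example above only has a Jordan form with a 1 on the superdiagonal:

[CODE=python]
from sympy import Matrix

# Diagonalizable case: the eigenvectors form a basis, so A P = P D.
A = Matrix([[2, 1],
            [1, 2]])
P, D = A.diagonalize()        # columns of P are eigenvectors of A
print(A * P == P * D)         # True

# Defective case (the example above): no eigenvector basis exists, so the
# best available is the Jordan form, with a 1 on the superdiagonal.
B = Matrix([[1, 1],
            [0, 1]])
Q, J = B.jordan_form()        # B*Q == Q*J, i.e. B == Q*J*Q**-1
print(J)                      # Matrix([[1, 1], [0, 1]])
[/CODE]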

Suppose we have matrices A and B. A has Jordan form J such that AP = PJ. B has Jordan form K such that BQ = QK. Then the determinant of A+B is:

det(A + B) = det( PJP^{-1} + QKQ^{-1} )

If P happens to equal Q, then:

det(A + B) = det( P( J+K )P^{-1} ) = \prod (\lambda_j + \lambda_k)

One situation where P can equal Q is when A is the identity matrix. So the matrix M := I+B would have eigenvalues \lambda_m = 1+\lambda_b.

Is this explanation correct?

- Thanks
From what you say, it would only follow that the product of the eigenvalues of M is the product of (1 + eigenvalues of B), not that the individual eigenvalues match up. (That stronger statement does happen to be true, but it follows from the direct argument Bx = \lambda x \Rightarrow (I+B)x = (1+\lambda)x, not from the determinant identity alone.)
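A quick numerical check of both points, with made-up matrices: shifting by the identity really does shift every eigenvalue by 1, while for two generic matrices the eigenvalues of A + B are not the pairwise sums of the individual eigenvalues:

[CODE=python]
import numpy as np

# Made-up matrix B with eigenvalues 2 and -1.
B = np.array([[0.0, 2.0],
              [1.0, 1.0]])
I = np.eye(2)

print(np.sort(np.linalg.eigvals(B)))       # approximately [-1.  2.]
print(np.sort(np.linalg.eigvals(I + B)))   # approximately [ 0.  3.], i.e. 1 + each eigenvalue of B

# For two generic matrices, the eigenvalues of A + B are NOT the pairwise
# sums of the individual eigenvalues, because P and Q generally differ.
A = np.array([[3.0, 0.0],
              [4.0, 1.0]])                  # eigenvalues 3 and 1
print(np.sort(np.linalg.eigvals(A + B)))    # roughly [-0.70  5.70], not {5, 0} or {2, 3}
[/CODE]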
 