# Find the eigenvalues and eigenvectors of a 3×3 matrix

In summary, a 3×3 matrix A is given only through its action on three vectors. The first two relations show that (1,2,1)^T and (1,-1,1)^T are eigenvectors with eigenvalues 6 and 3; subtracting the third relation from the second shows that (-1,0,1)^T lies in the kernel of A, so the eigenvalues are 0, 3 and 6.

#### Michael_0039

Assume a 3×3 matrix A with the following:

A [ 1 2 1 ]^T = 6 [ 1 2 1 ]^T
A [ 1 -1 1 ]^T = 3 [ 1 -1 1 ]^T
A [ 2 -1 0]^T = 3 [ 1 -1 1]^T

Find the eigenvalues and eigenvectors.

I have in mind to start from Av = λv, i.e. (A − λI)v = 0, which has nontrivial solutions when det(A − λI) = 0...

Also, the first 2 equations seem to have the form Av = λv:
So maybe u1 = [ 1 2 1 ]^T with λ = 6 and u2 = [ 1 -1 1 ]^T with λ = 3 ... but what about the 3rd?

Why don't you find A explicitly first?

Matrix A is not given, only those equations.

Those equations are equivalent to a product relation of 3×3 matrices, say
$$AB=C$$
So explicitly
$$A=CB^{-1}$$
...but I see that getting an explicit A is not the most straightforward way.
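For reference, that product relation can be carried out numerically. A minimal numpy sketch, where the choice to put the three given vectors as the columns of B and the corresponding right-hand sides as the columns of C is mine, not stated in the thread:

```python
import numpy as np

# Columns of B: the three given input vectors (1,2,1), (1,-1,1), (2,-1,0).
B = np.array([[1.0,  1.0,  2.0],
              [2.0, -1.0, -1.0],
              [1.0,  1.0,  0.0]])
# Columns of C: the right-hand sides 6(1,2,1), 3(1,-1,1), 3(1,-1,1).
C = np.array([[ 6.0,  3.0,  3.0],
              [12.0, -3.0, -3.0],
              [ 6.0,  3.0,  3.0]])

# A B = C  =>  A = C B^{-1}  (B is invertible: det B = 6).
A = C @ np.linalg.inv(B)
print(np.round(A))  # A = [[2, 1, 2], [1, 5, 1], [2, 1, 2]]
```

One can verify directly that this A reproduces all three given relations.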

Also, the first 2 equations seem to have the form Av = λv:
So maybe u1 = [ 1 2 1 ]^T with λ = 6 and u2 = [ 1 -1 1 ]^T with λ = 3 ... but what about the 3rd?
Correct.

The second and third equations mean that the eigenspace for the eigenvalue ##3## has dimension two. It is spanned by ##(1,-1,1)^\tau## and ##(2,-1,0)^\tau.##

So the answer will be:
λ1 = 6 with u1 = [ 1 2 1 ]^T
λ2 = 3 with u2 = [ 1 -1 1 ]^T and u3 = [ 2 -1 0 ]^T

(?)

Yes.

I mean, almost.
All nonzero multiples of ##(1,2,1)## are eigenvectors for the eigenvalue ##6,## too.
All nonzero linear combinations of ##(1,-1,1)## and ##(2,-1,0)## are eigenvectors for the eigenvalue ##3,## too.

@Michael_0039 note my edit. You've been too fast.

If ##v_1,\ldots,v_n## are all eigenvectors for the same eigenvalue ##\lambda,## then
$$A\left(\sum_{k=1}^n \alpha_k v_k\right)=\sum_{k=1}^n\alpha_k A(v_k)=\sum_{k=1}^n\alpha_k \lambda v_k=\lambda \cdot \left(\sum_{k=1}^n\alpha_k v_k\right)$$
so all nonzero linear combinations are eigenvectors for the eigenvalue ##\lambda,## too.
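The identity above can also be checked numerically. A small illustrative sketch using a made-up diagonal matrix (not the A from this thread) whose eigenspace for eigenvalue 3 is two-dimensional:

```python
import numpy as np

# Illustrative matrix (my choice, not the A from the thread): the eigenspace
# for lambda = 3 is two-dimensional, spanned by e1 and e2.
A = np.diag([3.0, 3.0, 6.0])
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# A nonzero linear combination of eigenvectors for the same eigenvalue
# is again an eigenvector for that eigenvalue.
w = 2.5 * e1 - 4.0 * e2
print(np.allclose(A @ w, 3 * w))  # True
```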

So for the 2nd question: is A invertible and/or diagonalizable?
My answer: A^{-1} exists if λ1, λ2 ≠ 0, and A is not diagonalizable because λ2 appears only once but has 2 eigenvectors (?)

So for the 2nd question: is A invertible ...
What do you think? Hint: not invertible implies not injective. Now, what does not injective mean?

... and/or diagonalizable
Are those three vectors linearly independent? If so, they form a basis. How does ##A## look with respect to that basis?

A [ 1 -1 1 ]^T = 3 [ 1 -1 1 ]^T
A [ 2 -1 0]^T = 3 [ 1 -1 1]^T
By subtraction I observe
$$A[-1,0,1]^T=[0,0,0]^T=0[-1,0,1]^T$$
which suggests ##\lambda=0,3,6## altogether.
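This conclusion can be verified against an explicit A. A sketch assuming A is reconstructed from the three relations via A = C B^{-1} (that reconstruction, and the resulting matrix, are my own working, not given in the thread):

```python
import numpy as np

# A reconstructed from the three relations via A = C B^{-1} (my computation,
# an assumption of this example, not stated in the thread).
A = np.array([[2.0, 1.0, 2.0],
              [1.0, 5.0, 1.0],
              [2.0, 1.0, 2.0]])

# Subtracting the third relation from the second gives A(-1,0,1)^T = 0,
# so (-1,0,1)^T lies in the kernel and 0 is an eigenvalue.
print(np.allclose(A @ np.array([-1.0, 0.0, 1.0]), 0.0))  # True
# The full spectrum comes out as 0, 3, 6.
print(np.round(np.sort(np.linalg.eigvals(A).real), 6))
```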

Correct.

The second and third equations mean that the eigenspace for the eigenvalue ##3## has dimension two. It is spanned by ##(1,-1,1)^\tau## and ##(2,-1,0)^\tau.##

The second and third equations tell us that ##(2, -1, 0)^\tau - (1, -1, 1)^\tau = (1,0,-1)^\tau \in \ker A.##

Thank you. My mistake! I misread it as ##A(2,-1,0)^\tau=3\,(2,-1,0)^\tau.##

@Michael_0039 Forget my posts. I was thinking about a different situation.