Eigenvalue, Eigenvector and Eigenspace

negation
Let's say my eigenvalue is λ = -1, and we assume an eigenvector of zero is a non-eigenvector.
An eigenspace is mathematically represented as Eλ = N(λIn - A), which essentially states, in natural language, that the eigenspace is the nullspace of a matrix.
N(λIn - A) is a matrix.

Would it then be valid to say that the eigenspace, Eλ, whose eigenvalue is λ = -1, is the nullspace of the matrix, N(λIn - A), and is equivalent to the vector v, where
Av = 0?

If v is the nullspace of the matrix A, then Av = 0, and similarly, if Eλ is the nullspace of a matrix, N(λIn - A), then it must equally be true that
[ N(λIn - A) ] [Eλ=-1] = 0
 
You mean you are trying to construct an eigenspace from a single vector?
 
It is not clear exactly what you mean by
[ N(λIn - A) ] [Eλ=-1] = 0
Certainly some interpretations would make the statement true, but
[ N(λIn - A) ] = [Eλ=-1]
is also true (under some interpretations).
 
negation said:
Let's say my eigenvalue is λ = -1, and we assume an eigenvector of zero is a non-eigenvector.
"We assume an eigenvector of zero is a non-eigenvector"? I don't know what this means. Do you mean that you are assuming that 0 is not an eigenvector, or are you asserting that an "eigenvector" must not be the 0 vector?

An eigenspace is mathematically represented as Eλ = N(λIn - A), which essentially states, in natural language, that the eigenspace is the nullspace of a matrix.
N(λIn - A) is a matrix.

Would it then be valid to say that the eigenspace, Eλ, whose eigenvalue is λ = -1, is the nullspace of the matrix, N(λIn - A), and is equivalent to the vector v, where
Av = 0?
No. First, it is a set of vectors, not a single vector. Second, it is the set of vectors v such that (λIn - A)v = 0, or equivalently, λv = Av, as you said, not "Av = 0".

If v is the nullspace of the matrix A, then Av = 0, and similarly, if Eλ is the nullspace of a matrix, N(λIn - A), then it must equally be true that
[ N(λIn - A) ] [Eλ=-1] = 0
I can't make sense out of that. You have, as part of your hypothesis, "If v is (in) the nullspace of the matrix A", but your conclusion has no "v" in it.
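For concreteness, here is a minimal numerical sketch of that point (the example matrix is the 2x2 one used later in this thread, and the NumPy calls are only an illustration, not part of the original reply): an eigenvector v of A satisfies Av = λv, which is not the same as Av = 0 unless λ = 0.

Code:
import numpy as np

# The 2x2 matrix discussed later in the thread.
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, v)
    print(np.allclose(A @ v, lam * v))   # True: A v = lambda v
    print(np.allclose(A @ v, 0))         # False here: A v is not the zero vector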
 
HallsofIvy said:
"We assume eigenvector of zero are non-eigenvector"? I don't know what this means. Do you mean that you are assuming that 0 is not an eigenvector or are you asserting that an "eigenvector" must not be the 0 vector?


No. First, it is a set of vectors, not a single vector. Second, it is the set of vectors v such that (λIn - A)v = 0, or equivalently, λv = Av, as you said, not "Av = 0".


I can't make sense out of that. You have, as part of your hypothesis, "If v is (in) the nullspace of the matrix A", but your conclusion has no "v" in it.

You probably can't, because it doesn't make sense. It's still a new topic to me, so I figured the fastest way to refine my understanding is to state what I know and have it corrected where the understanding is flawed.

I'm stating that the zero vector is not considered an eigenvector, as I've learnt.

Let's see:

λ is an eigenvalue of the eigenvector V IFF det(λIn - A) = 0

Say we have a matrix [1 2; 4 3], and det(λIn - A) = 0 gives
λ1 = 5 and λ2 = -1

1) Does this then state that λ1 = 5 and λ2 = -1 are eigenvalues corresponding to the eigenvector V?

2) Eλ1, Eλ2 = Null(λIn - A) is interpreted as the set of vectors with eigenvalues of λ1 and λ2 that map
the matrix (λIn - A) to the zero vector. True?

3) where λ1=-1:

I have Eλ1 = N(λIn - A) = N(-1[1 0; 0 1] - [1 2; 4 3])
= N([-2 -2; -4 -4])

4) (I shall take only λ1 as an example.) Since Eλ1 maps the matrix [-2 -2; -4 -4] to the zero vector, could it then be written
as AV = 0,
where A = [1 2; 4 3] and V is the set of vectors V = {v1, v2, ...vn} that maps the matrix A to the zero vector?

If I could, then matrix A after RREF (is it a necessary condition to perform RREF on matrix A? What is the implication if I perform REF instead of RREF?)
becomes [ 1 1; 0 0 ] and V = {v1, v2},
and so,
[ 1 1; 0 0] [v1; v2] = [0; 0]
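Not part of the original question, but here is a short SymPy sketch of the computations in steps 2) and 3) above for A = [1 2; 4 3] (the code and the library choice are my own, not from the post):

Code:
from sympy import Matrix, eye, symbols, solve

A = Matrix([[1, 2], [4, 3]])
lam = symbols('lam')

# Characteristic equation det(lam*I - A) = 0 gives the eigenvalues.
char_eq = (lam * eye(2) - A).det()
print(solve(char_eq, lam))        # [-1, 5]

# Step 3): E_{λ1} for λ1 = -1 is N(-1*I - A) = N([-2 -2; -4 -4]).
M = -1 * eye(2) - A
print(M)                          # Matrix([[-2, -2], [-4, -4]])
print(M.nullspace())              # [Matrix([[-1], [1]])] -> eigenvectors are the nonzero multiples of (-1, 1)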
 
negation said:
You probably can't, because it doesn't make sense. It's still a new topic to me, so I figured the fastest way to refine my understanding is to state what I know and have it corrected where the understanding is flawed.

I'm stating that the zero vector is not considered an eigenvector, as I've learnt.
By definition, an eigenvector is nonzero.
negation said:
Let's see:

λ is an eigenvalue of the eigenvector V IFF det(λIn - A) = 0

Say we have a matrix [1 2; 4 3], and det(λIn - A) = 0 gives
λ1 = 5 and λ2 = -1

1) Does this then state that λ1 = 5 and λ2 = -1 are eigenvalues corresponding to the eigenvector V?
These are eigenvalues of that matrix. Each eigenvalue has its own eigenvector. For the eigenvalue λ1 = 5, an eigenvector associated with this eigenvalue is any nonzero solution x of the matrix equation Ax = 5x. Equivalently, x is a nonzero solution of (A - 5I)x = 0.
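As a concrete illustration of that with the matrix from the post above (the SymPy code is my own sketch, not something from the thread), solving (A - 5I)x = 0:

Code:
from sympy import Matrix, eye

A = Matrix([[1, 2], [4, 3]])
B = A - 5 * eye(2)            # A - 5I = [[-4, 2], [4, -2]]

print(B.nullspace())          # [Matrix([[1/2], [1]])] -> eigenvectors are the nonzero multiples of (1, 2)
print(A * Matrix([1, 2]))     # Matrix([[5], [10]]) = 5 * (1, 2), confirming A x = 5 x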
negation said:
2) Eλ1, Eλ2 = Null(λIn - A) is interpreted as the set of vectors with eigenvalues of λ1 and λ2 that map
the matrix (λIn - A) to the zero vector. True?
Not sure. This is a pretty cumbersome way to say it.
For a given eigenvalue λ of a matrix A, x is an eigenvector if it is a nonzero solution of the equation Ax = λx.
This is equivalent to (A - λI)x = 0, so x is any nonzero vector in the nullspace of A - λI.
negation said:
3) where λ1=-1:

I have Eλ1 = N(λIn - A) = N(-1[1 0; 0 1] - [1 2; 4 3])
= N([-2 -2; -4 -4])


4) (I shall take only λ1 as an example.) Since Eλ1 maps the matrix [-2 -2; -4 -4] to the zero vector, could it then be written
as AV = 0,
where A = [1 2; 4 3] and V is the set of vectors V = {v1, v2, ...vn} that maps the matrix A to the zero vector?
No. You're not looking for solutions of Ax = 0 - you're looking for solutions of (A - λI)x = 0.
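A small sketch of that distinction (my own check in SymPy): Ax = 0 has only the trivial solution here, because A is invertible, while (A - λI)x = 0 has nonzero solutions precisely because λ = -1 is an eigenvalue.

Code:
from sympy import Matrix, eye

A = Matrix([[1, 2], [4, 3]])

print(A.det())                   # -5, nonzero, so A x = 0 forces x = 0
print(A.nullspace())             # []  -> no nonzero solutions of A x = 0

print((A + eye(2)).det())        # 0, so (A - (-1)I) x = (A + I) x = 0 has nonzero solutions
print((A + eye(2)).nullspace())  # [Matrix([[-1], [1]])] -> eigenvectors for lambda = -1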
negation said:
If I could, then matrix A after RREF (is it a necessary condition to perform RREF on matrix A? What is the implication if I perform REF instead of RREF?)
becomes [ 1 1; 0 0 ] and V = {v1, v2},
and so,
[ 1 1; 0 0] [v1; v2] = [0; 0]
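On the REF vs RREF question quoted above: a minimal sketch (my own, using SymPy, and assuming the reduction is applied to A - λI rather than to A itself). Row operations do not change the solution set, so REF and RREF of A - λI give the same nullspace; RREF simply makes the free variables easier to read off.

Code:
from sympy import Matrix, eye

A = Matrix([[1, 2], [4, 3]])
M = A + eye(2)                 # A - lambda*I with lambda = -1, i.e. [[2, 2], [4, 4]]

rref_M, pivot_columns = M.rref()
print(rref_M)                  # Matrix([[1, 1], [0, 0]])  -> x1 + x2 = 0
print(M.nullspace())           # [Matrix([[-1], [1]])], the same solution set either way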
 
Mark44 said:
By definition, an eigenvector is nonzero.

I'm going to jump in here: Yes, many textbooks define an eigenvector to be non-zero. But many others define "eigenvalue" by "λ is an eigenvalue for linear operator A if there exists a non-zero vector, v, such that Av = λv" and then define an eigenvector, corresponding to eigenvalue λ, to be any vector, v, such that Av = λv, which includes the 0 vector.

Personally, I prefer that because it allows us to say "the set of all eigenvectors, corresponding
to eigenvalue λ, is a subspace" rather than having to say "the set of all eigenvectors, corresponding to eigenvalue λ, together with the 0 vector, is a subspace".
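In symbols, a short sketch of why that convention makes the set a subspace (my own wording, not a quote from the post):

E_\lambda = \{\, v : Av = \lambda v \,\} = N(A - \lambda I_n),

and for any v, w \in E_\lambda and scalar c,

A\mathbf{0} = \lambda \mathbf{0}, \qquad A(v + w) = Av + Aw = \lambda (v + w), \qquad A(cv) = cAv = \lambda (cv),

so E_\lambda contains the zero vector and is closed under addition and scalar multiplication.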
 