Eigenvalue, Eigenvector and Eigenspace

In summary, the conversation discusses eigenspaces and their relation to eigenvectors and eigenvalues. An eigenvector must be nonzero, and for a given eigenvalue λ, an eigenvector is any nonzero solution of the matrix equation Ax = λx; equivalently, the eigenspace is the nullspace of the matrix A - λI. The conversation also touches on RREF and REF and their role in computing eigenvectors.
  • #1
negation
Let's say my eigenvalue λ=-1 and we assume eigenvector of zero are non-eigenvector.
An eigenspace is mathematically represented as Eλ = N(λIn - A), which essentially states, in natural language, that the eigenspace is the nullspace of a matrix.
N(λIn - A) is a matrix.

Would it then be valid to say that the eigenspace, Eλ, whose eigenvalue, λ = -1, is the nullspace of the matrix, N(λIn - A), is equivalent to the vector, v, where
Av = 0?

If v is the nullspace of the matrix A then Av = 0, and similarly, if Eλ is the nullspace of a matrix, N(λIn - A), then it must equally be true that
[ N(λIn - A) ] [Eλ=-1] = 0
 
  • #2
You mean you are trying to construct an eigenspace from a single vector?
 
  • #3
It is not clear exactly what you mean by
[ N(λIn - A) ] [Eλ=-1] = 0
Certainly some interpretations would make the statement true, but
[ N(λIn - A) ] = [Eλ=-1]
is also true (under some interpretations).
 
  • #4
negation said:
Let's say my eigenvalue λ=-1 and we assume eigenvector of zero are non-eigenvector.
"We assume eigenvector of zero are non-eigenvector"? I don't know what this means. Do you mean that you are assuming that 0 is not an eigenvector or are you asserting that an "eigenvector" must not be the 0 vector?

An eigenspace is mathematically represented as Eλ = N(λIn - A), which essentially states, in natural language, that the eigenspace is the nullspace of a matrix.
N(λIn - A) is a matrix.

Would it then be valid to say that the eigenspace, Eλ, whose eigenvalue, λ = -1, is the nullspace of the matrix, N(λIn - A), is equivalent to the vector, v, where
Av = 0?
No. First, it is a set of vectors, not a single vector. Second, it is the set of vectors such that (λIn - A)v = 0, or equivalently, λIn v = Av, as you said, not "Av = 0".

If v is the nullspace of the matrix A then Av = 0, and similarly, if Eλ is the nullspace of a matrix, N(λIn - A), then it must equally be true that
[ N(λIn - A) ] [Eλ=-1] = 0
I can't make sense out of that. You have, as part of your hypothesis, "If v is (in) the nullspace of the matrix A", but your conclusion has no "v" in it.
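
To restate the correct relationship compactly (my own summary of the two corrections above, in the thread's notation): the eigenspace for an eigenvalue [itex]\lambda[/itex] is the set of solutions
[tex]E_\lambda = \{v : Av = \lambda v\} = N(\lambda I_n - A),[/tex]
a subspace of vectors, not a matrix and not a single vector.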
 
  • #5
HallsofIvy said:
"We assume eigenvector of zero are non-eigenvector"? I don't know what this means. Do you mean that you are assuming that 0 is not an eigenvector or are you asserting that an "eigenvector" must not be the 0 vector?


No. First, it is a set of vectors, not a single vector. Second, it is the set of vectors such that (λIn - A)v = 0, or equivalently, λIn v = Av, as you said, not "Av = 0".


I can't make sense out of that. You have, as part of your hypothesis, "If v is (in) the nullspace of the matrix A", but your conclusion has no "v" in it.

You probably can't because it doesn't make sense. It's still a new topic to me, so I figured the fastest way to refine my understanding is to state what I know and have it corrected where the understanding is flawed.

I'm stating that, as I've learnt, the zero vector is not considered an eigenvector.

Let's see:

λ is an eigenvalue of the eigenvector V IFF det(λIn - A) = 0

Say we have a matrix [1 2; 4 3] and det(λIn - A) = 0 gives
λ1=5 and λ2 = -1

1) Does this then state that λ1 = 5 and λ2 = -1 are eigenvalues corresponding to the eigenvector V?

2) Eλ1,2 = Null(λIn - A) is interpreted as the set of vectors with eigenvalues of λ1 and λ2 that map
the matrix (λIn - A) to the zero vector. True?

3) Where λ2 = -1:

I have Eλ2 = N(λ2In - A) = N(-1[1 0; 0 1] - [1 2; 4 3])
= N([-2 -2; -4 -4])

4) I shall take only λ2 as an example. Since Eλ2 maps the matrix [-2 -2; -4 -4] to the zero vector, could it then be written
as AV = 0,
where A = [1 2; 4 3] and V is the set of vectors V = {v1, v2, ... vn} that map the matrix A to the zero vector?

If I could, then matrix A after RREF (is it a necessary condition to perform RREF on matrix A? What is the implication if I perform REF instead of RREF?)
becomes [ 1 1; 0 0 ] and V = {v1, v2},
and so,
[ 1 1; 0 0] [v1;v2] = [0;0]
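
As a quick numerical check of these numbers (a sketch in Python with numpy, not part of the original post):

import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# Eigenvalues of A: the roots of det(lambda*I - A) = 0
eigenvalues, _ = np.linalg.eig(A)
print(eigenvalues)       # approximately [ 5., -1.] (order may vary)

# For lambda = -1, the eigenspace is the nullspace of A - (-1)*I = A + I
M = A + np.eye(2)        # [[2., 2.], [4., 4.]]
v = np.array([1.0, -1.0])
print(M @ v)             # [0. 0.] -- v lies in the nullspace
print(A @ v)             # [-1.  1.] == -1 * v, so v is an eigenvector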
 
  • #6
negation said:
You probably can't because it doesn't make sense. It's still a new topic to me so I'd figure the fastest way to refine my understanding is to state what I know and have it corrected where the understanding is flawed.

I'm stating that a zero vector is not within the consideration of an eigenvector as I've learnt.
By definition, an eigenvector is nonzero.
negation said:
Let's see:

λ is an eigenvalue of the eigenvector V IFF det(λIn - A) = 0

Say we have a matrix [1 2; 4 3] and det(λIn - A) = 0 gives
λ1=5 and λ2 = -1

1) Does this then state that λ1 = 5 and λ2 = -1 are eigenvalues corresponding to the eigenvector V?
These are eigenvalues of that matrix. Each eigenvalue has its own eigenvector. For the eigenvalue λ1 = 5, an eigenvector associated with this eigenvalue is any nonzero solution x of the matrix equation Ax = 5x. Equivalently, x is a nonzero solution of (A - 5I)x = 0.
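For instance, working this out with your matrix (a worked step): with [itex]\lambda_1 = 5[/itex],
[tex]A - 5I = \begin{bmatrix}-4 & 2\\ 4 & -2\end{bmatrix},[/tex]
whose nullspace is spanned by [itex](1, 2)^T[/itex]; indeed [itex]A(1, 2)^T = (5, 10)^T = 5(1, 2)^T[/itex].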
negation said:
2) Eλ1,2 = Null(λIn - A) is interpreted as the set of vectors with eigenvalues of λ1 and λ2 that map
the matrix (λIn - A) to the zero vector. True?
Not sure. This is a pretty cumbersome way to say it.
For a given eigenvalue λ of a matrix A, x is an eigenvector if it is a nonzero solution of the equation Ax = λx.
This is equivalent to (A - λI)x = 0, so x is any nonzero vector in the nullspace of A - λI.
negation said:
3) Where λ2 = -1:

I have Eλ2 = N(λ2In - A) = N(-1[1 0; 0 1] - [1 2; 4 3])
= N([-2 -2; -4 -4])


4) I shall take only λ2 as an example. Since Eλ2 maps the matrix [-2 -2; -4 -4] to the zero vector, could it then be written
as AV = 0,
where A = [1 2; 4 3] and V is the set of vectors V = {v1, v2, ... vn} that map the matrix A to the zero vector?
No. You're not looking for solutions of Ax = 0; you're looking for solutions of (A - λI)x = 0.
negation said:
If I could, then matrix A after RREF (is it a necessary condition to perform RREF on matrix A? What is the implication if I perform REF instead of RREF?)
becomes [ 1 1; 0 0 ] and V = {v1, v2},
and so,
[ 1 1; 0 0] [v1;v2] = [0;0]
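To carry the corrected computation through (my own worked step): row operations do not change the nullspace, so REF works just as well as RREF here; RREF only makes the solutions easier to read off. Row-reducing [itex]\lambda I_n - A = \begin{bmatrix}-2 & -2\\ -4 & -4\end{bmatrix}[/itex] (for [itex]\lambda = -1[/itex]) gives [itex]\begin{bmatrix}1 & 1\\ 0 & 0\end{bmatrix}[/itex], i.e. [itex]x_1 + x_2 = 0[/itex], so the eigenspace is spanned by [itex](1, -1)^T[/itex]; check: [itex]A(1, -1)^T = (-1, 1)^T = -1\,(1, -1)^T[/itex].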
 
  • #7
Mark44 said:
By definition, an eigenvector is nonzero.

I'm going to jump in here. Yes, many textbooks define an eigenvector to be non-zero. But many others define "eigenvalue" by "[itex]\lambda[/itex] is an eigenvalue for linear operator A if there exists a non-zero vector, v, such that [itex]Av= \lambda v[/itex]" and then define an eigenvector, corresponding to eigenvalue [itex]\lambda[/itex], to be any vector, v, such that [itex]Av= \lambda v[/itex], which includes the 0 vector.

Personally, I prefer that because it allows us to say "the set of all eigenvectors, corresponding
to eigenvalue [itex]\lambda[/itex], is a subspace" rather than having to say "the set of all eigenvectors, corresponding to eigenvalue [itex]\lambda[/itex], together with the 0 vector, is a subspace".
 

1. What are eigenvalues, eigenvectors, and eigenspaces?

Eigenvalues, eigenvectors, and eigenspaces are concepts in linear algebra that are used to understand the behavior of a linear transformation or matrix. An eigenvalue is a scalar that represents how the transformation stretches or compresses a vector. An eigenvector is a nonzero vector that, when multiplied by the transformation, is only scaled, remaining on the same line through the origin. The eigenspace is the set of all eigenvectors associated with a particular eigenvalue, together with the zero vector.

2. How are eigenvalues and eigenvectors calculated?

To calculate eigenvalues and eigenvectors, we need to solve the characteristic equation det(A-λI) = 0, where A is the matrix and λ is the eigenvalue. The solutions to this equation are the eigenvalues. Once we have the eigenvalues, we can plug them back into the equation (A-λI)x = 0 to find the corresponding eigenvectors.
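
A short illustrative sketch of this two-step procedure in Python with numpy (the 2x2 matrix is the example from the thread; everything else is my own illustration):

import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# Step 1: find the eigenvalues (numerically, instead of factoring
# the characteristic polynomial det(A - lambda*I) by hand)
eigenvalues, eigenvectors = np.linalg.eig(A)

# Step 2: each column of `eigenvectors` solves (A - lambda*I)x = 0,
# i.e. A x = lambda x
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)

print(eigenvalues)   # approximately [ 5., -1.] (order may vary)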

3. What is the significance of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are important because they provide insights into the behavior of a linear transformation or matrix. They can be used to understand how the transformation stretches, compresses, rotates, or shears a vector. They are also useful for solving systems of linear equations, finding important features in data, and analyzing the stability of dynamic systems.

4. Can a matrix have imaginary eigenvalues?

Yes, a matrix can have imaginary (more generally, non-real complex) eigenvalues. Even a matrix with all real entries can have complex eigenvalues, because its characteristic equation may have complex roots. For a real matrix, non-real eigenvalues occur in complex-conjugate pairs, and the corresponding eigenvectors are complex conjugates of each other.
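
A standard concrete example (mine, not from the FAQ above): the 90-degree rotation matrix [itex]\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}[/itex] has real entries, but its characteristic equation is [itex]\lambda^2 + 1 = 0[/itex], giving the conjugate pair [itex]\lambda = \pm i[/itex] with eigenvectors [itex](1, \mp i)^T[/itex].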

5. How is eigendecomposition used in data analysis?

Eigendecomposition, also known as spectral decomposition, is a method for decomposing a matrix into its eigenvalues and eigenvectors. In data analysis, eigendecomposition is used to transform a dataset into a new coordinate system where the axes are aligned with the directions of maximum variability. This allows for easier visualization and analysis of the data. Additionally, eigendecomposition is used in techniques such as principal component analysis and data compression.
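
A minimal sketch of this use in Python with numpy (toy data; all names and the random data are my own illustration, not a reference implementation):

import numpy as np

rng = np.random.default_rng(0)
# Correlated toy data: 200 samples, 2 features
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0],
                                          [1.0, 0.5]])

# Covariance matrix of the mean-centered data
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)

# Eigendecomposition: eigenvectors of C are the directions
# of maximum variance (the principal components)
eigenvalues, eigenvectors = np.linalg.eigh(C)   # eigh, since C is symmetric
order = np.argsort(eigenvalues)[::-1]           # sort by decreasing variance
components = eigenvectors[:, order]

# Project the data onto the new axes
scores = Xc @ components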
