Invariant spaces and eigenvector problem

In summary, the thread discusses how to show that every non-zero vector in a one-dimensional A-invariant subspace is an eigenvector of A. The attempted solution assumes that A must be a one by one matrix, which is incorrect, since any n by n matrix can have a one-dimensional invariant subspace. The correct approach uses the fact that all non-zero vectors in the subspace are scalar multiples of one another, and uses this to show that the eigenvalues for different vectors in the subspace are the same.
  • #1
gottfried

Homework Statement


Let W be a 1-dimensional subspace of V that is A-invariant. Show that every non-zero vector in W is an eigenvector of A. [A is an element of Mn(F).]

The Attempt at a Solution



We know W is A-invariant, therefore for all w in W, A.w is in W. W is one-dimensional, which implies to me that A must therefore be a one by one matrix with an entry from F. Is this a correct assumption?

If so, then A.w = λ.w, where λ is an element of F, which implies that all w in W are eigenvectors of A.

I'm new to this sort of linear algebra and therefore can't tell whether I've made a blatant mistake.
 
  • #2
gottfried said:

Homework Statement


Let W be a 1-dimensional subspace of V that is A-invariant. Show that every non-zero vector in W is an eigenvector of A. [A is an element of Mn(F).]

The Attempt at a Solution



We know W is A-invariant, therefore for all w in W, A.w is in W. W is one-dimensional, which implies to me that A must therefore be a one by one matrix with an entry from F. Is this a correct assumption?
No, in fact it makes no sense at all. Any n by n matrix can have a "1-dimensional invariant subspace".

If so, then A.w = λ.w, where λ is an element of F, which implies that all w in W are eigenvectors of A.
How did you arrive at that? If w is in the invariant subspace, W, then Aw is also in W and, since W is "1-dimensional", it must be a multiple of w. But it does not immediately follow that that multiple is the same for all vectors in W.

Suppose that u and v are different non-zero vectors in W. Then we can say that [itex]Au= \lambda_1 u[/itex] and that [itex]Av= \lambda_2 v[/itex] but we cannot yet say that [itex]\lambda_1= \lambda_2[/itex]. To do that, we need to use the fact that W is "1-dimensional" again. Because of that, any vector in the subspace is a multiple of any other (except 0). We can write u= xv so that Au= x(Av). What does that give you?

I'm new to this sort of linear algebra and therefore can't tell whether I've made a blatant mistake.
 
  • #3
Thanks for the help. I'm still a little confused though.

HallsofIvy said:
Suppose that u and v are different non-zero vectors in W. Then we can say that [itex]Au= \lambda_1 u[/itex] and that [itex]Av= \lambda_2 v[/itex] but we cannot yet say that [itex]\lambda_1= \lambda_2[/itex].

Why do we need to show that the two λ's are the same? Isn't [itex]Au= \lambda_1 u[/itex] sufficient, because that is the definition of an eigenvector? Or do eigenvalues have to be unique for each matrix?
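For reference, a minimal sketch of the computation the hint above points to, assuming u = xv with x ≠ 0 (possible because u and v are non-zero vectors in the one-dimensional subspace W):

[tex]Au = A(xv) = x(Av) = x(\lambda_2 v) = \lambda_2 (xv) = \lambda_2 u.[/tex]

Comparing this with [itex]Au = \lambda_1 u[/itex] and using [itex]u \neq 0[/itex] gives [itex]\lambda_1 = \lambda_2[/itex], so every non-zero vector of W is an eigenvector of A with one and the same eigenvalue.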
 

1. What are invariant spaces in relation to eigenvectors?

Invariant spaces are subspaces of a vector space that are mapped into themselves by a linear transformation: the transformation may move vectors around inside the subspace, but it never sends them outside it. Eigenvectors are closely related, because the line spanned by an eigenvector is a one-dimensional invariant subspace; on that line the operator acts only by a scalar factor.
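For example, if [itex]Av = \lambda v[/itex] with [itex]v \neq 0[/itex], then for any scalar c,

[tex]A(cv) = c(Av) = c\lambda v = \lambda (cv),[/tex]

which again lies in the line spanned by v, so that line is a one-dimensional invariant subspace.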

2. How are invariant spaces and eigenvectors used in linear algebra?

Invariant spaces and eigenvectors are important concepts in linear algebra because they provide a way to simplify complex linear transformations. By identifying invariant subspaces and their corresponding eigenvectors, we can break a linear transformation into smaller, simpler pieces, for example by writing its matrix in diagonal or block form with respect to a basis adapted to those subspaces.

3. What is the difference between an invariant space and an eigenspace?

Both terms refer to subspaces that are preserved by a linear transformation, but an eigenspace specifically refers to the set of all eigenvectors of the operator that share one particular eigenvalue, together with the zero vector. An invariant space, on the other hand, can be any subspace that the transformation maps into itself; every eigenspace is invariant, but an invariant subspace need not consist of eigenvectors.
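In symbols, the eigenspace of A for the eigenvalue λ is

[tex]E_\lambda(A) = \{\, v \in V : Av = \lambda v \,\} = \ker(A - \lambda I),[/tex]

and it is always an A-invariant subspace.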

4. How do you find the invariant spaces of a linear transformation?

One way to find invariant subspaces of a linear transformation is to compute its eigenvalues and eigenvectors. The line spanned by each eigenvector is a one-dimensional invariant subspace, and the corresponding eigenvalue tells us by what factor the transformation scales the vectors in that line.
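As a small worked example (chosen here just for illustration), the diagonal matrix

[tex]A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}[/tex]

has eigenvectors [itex]e_1 = (1,0)^T[/itex] with eigenvalue 2 and [itex]e_2 = (0,1)^T[/itex] with eigenvalue 3, so the two coordinate axes are one-dimensional invariant subspaces: A stretches vectors along them by factors 2 and 3 respectively.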

5. Can an invariant space have more than one eigenvector?

Yes, an invariant space can contain more than one eigenvector; in fact it typically contains infinitely many. Every non-zero scalar multiple of an eigenvector is again an eigenvector with the same eigenvalue, and a higher-dimensional invariant subspace may contain eigenvectors belonging to several different eigenvalues. The one-dimensional case in the problem above is the simplest instance: all of its non-zero vectors are eigenvectors, and they share a single eigenvalue.
