If T is diagonalizable, is the restriction operator diagonalizable?

In summary, the conversation discusses the theorem that the restriction of a diagonalizable linear operator to an invariant subspace is also diagonalizable. While the proof is easily understood, the question arises as to why the assumption of invariance is necessary. It is explained that if the subspace is not invariant, the matrix representation of the restriction is not square and therefore cannot be called diagonalizable. The discussion also clarifies that this theory applies only to linear maps whose domain equals their codomain, and a restriction to a subspace qualifies as such a map only if the subspace is invariant. Finally, the invariance of the subspace is needed because it is otherwise undefined to speak of the minimal polynomial, and hence the diagonalizability, of the restriction.
  • #1
CGandC
TL;DR Summary
Does the minimal polynomial zero out the linear operator restricted to any subspace?
The usual theorem is talking about the linear operator being restricted to an invariant subspace:
Let ##T## be a diagonalizable linear operator on the ##n##-dimensional vector space ##V##, and let ##W## be a subspace of ##V## which is invariant under ##T##. Prove that the restriction operator ##T_W## is diagonalizable.​
I had no problem understanding its proof; it appears here, for example: https://math.stackexchange.com/ques...-t-w-is-diagonalizable-if-t-is-diagonalizable However, I had difficulty understanding why we needed the assumption that ##W## is ##T##-invariant. I mean: if ##m_T(x)## is the minimal polynomial of ##T##, then ##m_T(T)=0##, and thus for any subspace ##W \subseteq V## (not necessarily ##T##-invariant) ##m_T(T_W)=0##; so why was it necessary in the above theorem for ##W \subseteq V## to be ##T##-invariant?
 
  • #2
If [itex]W[/itex] is not [itex]T[/itex]-invariant, then the matrix representation of [itex]T_W[/itex] is not square: it must include additional rows to account for the part of [itex]T_W(W)[/itex] which is not in [itex]W[/itex]. In what sense is this non-square matrix "diagonalizable"?

This theory is defined for linear maps [itex]T: V \to V[/itex] where the codomain is the same as the domain, rather than some different space. A restriction of [itex]T: V \to V[/itex] to a subspace [itex]W \subset V[/itex] will only qualify as such a map if we have [itex]T_W: W \to W[/itex], i.e. [itex]W[/itex] is [itex]T[/itex]-invariant.
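For a concrete example, take [itex]T: \mathbb{R}^2 \to \mathbb{R}^2[/itex] defined by [itex]T(x,y) = (y,x)[/itex], which is diagonalizable (eigenvalues [itex]\pm 1[/itex] with eigenvectors [itex](1,1)[/itex] and [itex](1,-1)[/itex]), and let [itex]W = \operatorname{span}\{(1,0)\}[/itex]. Then [itex]T_W(1,0) = (0,1) \notin W[/itex], so [itex]T_W[/itex] is a map [itex]W \to \mathbb{R}^2[/itex], not an operator on [itex]W[/itex]. With respect to the basis [itex]\{(1,0)\}[/itex] of [itex]W[/itex] and the standard basis of [itex]\mathbb{R}^2[/itex], its matrix is the [itex]2 \times 1[/itex] matrix [itex]\begin{pmatrix} 0 \\ 1 \end{pmatrix}[/itex], which is not square and so cannot be called diagonalizable.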
 
  • #3
pasmith said:
If [itex]W[/itex] is not [itex]T[/itex]-invariant, then the matrix representation of [itex]T_W[/itex] is not square: it must include additional rows to account for the part of [itex]T_W(W)[/itex] which is not in [itex]W[/itex]. In what sense is this non-square matrix "diagonalizable"?

This theory is defined for linear maps [itex]T: V \to V[/itex] where the codomain is the same as the domain, rather than some different space. A restriction of [itex]T: V \to V[/itex] to a subspace [itex]W \subset V[/itex] will only qualify as such a map if we have [itex]T_W: W \to W[/itex], i.e. [itex]W[/itex] is [itex]T[/itex]-invariant.
Although it makes sense that the matrix representation of a diagonalizable operator should be a square matrix, I still don't see how this is needed to prove the theorem for a non-invariant subspace. A linear operator is also called diagonalizable iff its minimal polynomial decomposes into distinct linear factors of multiplicity 1, and this characterization of diagonalizability is independent of any matrix representation. So the minimal polynomial of ##T## restricted to an arbitrary subspace would still divide the minimal polynomial of ##T## itself, which is a product of distinct linear factors of multiplicity 1; hence the minimal polynomial of the restriction would also be a product of distinct linear factors of multiplicity 1, and ##T## restricted to the subspace would still be diagonalizable (regardless of the subspace's invariance). So I still don't see where the knowledge that the subspace is invariant is required to prove the above theorem.
 
  • #4
Ok, I think I understand fully now what you have said.
The minimal polynomial of a square matrix is defined as the monic polynomial of least degree that zeros out the matrix.
The minimal polynomial is likewise defined for a linear transformation whose domain is the same as its codomain (i.e. ##T: V \to V##), as the monic polynomial of least degree that zeros out that transformation.

So although it is true that ##m_T(T)w = 0## for every vector ##w## in an arbitrary subspace ##W \subseteq V##, it is undefined to talk about a minimal polynomial of ##T_W## if ##W## is not ##T##-invariant, since the domain of ##T_W## is then not equal to its codomain (indeed, powers of ##T_W## are not even defined).
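For example, take ##T(x,y) = (y,x)## on ##\mathbb{R}^2## and ##W = \operatorname{span}\{(1,0)\}##. Then ##T_W(1,0) = (0,1) \notin W##, so ##T_W^2(1,0) = T_W(0,1)## is not defined (the vector ##(0,1)## lies outside the domain ##W## of ##T_W##); hence no polynomial in ##T_W## of degree ##\ge 2## makes sense, let alone a minimal one.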

Am I correct?
 

1. What is a diagonalizable operator?

A diagonalizable operator is an operator that can be represented by a diagonal matrix in some basis. In that basis, the operator acts on each coordinate by multiplication by a scalar (the corresponding eigenvalue).
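For example, the operator on ##\mathbb{R}^2## with standard matrix ##\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}## has eigenvectors ##(1,1)## and ##(1,-1)## with eigenvalues ##3## and ##1##; in the basis ##\{(1,1),(1,-1)\}## it is represented by the diagonal matrix ##\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}##.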

2. What is the restriction operator?

The restriction operator ##T_W## is obtained by restricting the domain of an operator ##T## to a subspace ##W## of the vector space: it acts on vectors of ##W## exactly as ##T## does. It is a genuine operator on ##W## (i.e. a map ##W \to W##) only when ##W## is invariant under ##T##, that is, when ##T(W) \subseteq W##.

3. How do you determine if an operator is diagonalizable?

An operator is diagonalizable if it has a basis of eigenvectors. This means that there exists a basis for the vector space such that the operator's action on each basis vector is simply a scalar multiplication.
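For example, ##\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}## is not diagonalizable: its only eigenvalue is ##1## and its eigenvectors are the multiples of ##(1,0)##, which do not form a basis of ##\mathbb{R}^2##. Equivalently, its minimal polynomial ##(x-1)^2## has a repeated linear factor, whereas an operator is diagonalizable exactly when its minimal polynomial is a product of distinct linear factors.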

4. What is the relationship between diagonalizable operators and restriction operators?

If an operator ##T## is diagonalizable, then its restriction to a ##T##-invariant subspace ##W## is also diagonalizable. This is because the minimal polynomial of ##T_W## divides the minimal polynomial of ##T##, which is a product of distinct linear factors; hence the minimal polynomial of ##T_W## is such a product as well, and ##W## has a basis of eigenvectors of ##T_W##.
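For example, if ##T## is the operator on ##\mathbb{R}^3## with matrix ##\operatorname{diag}(1,2,3)## and ##W = \operatorname{span}\{e_1, e_2\}##, then ##W## is ##T##-invariant and ##T_W## has matrix ##\operatorname{diag}(1,2)##; its minimal polynomial ##(x-1)(x-2)## divides ##m_T(x) = (x-1)(x-2)(x-3)##.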

5. Can a restriction operator be diagonalizable if its parent operator is not?

Yes, it is possible for a restriction operator to be diagonalizable even if its parent operator is not. The restriction only needs a basis of the subspace consisting of eigenvectors, and such a basis can exist even when the whole space has no basis of eigenvectors of the parent operator.
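For example, ##T## on ##\mathbb{R}^2## with matrix ##\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}## is not diagonalizable, but ##W = \operatorname{span}\{e_1\}## is ##T##-invariant and ##T_W## is the zero operator on ##W##, which is (trivially) diagonalizable.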
