If T is diagonalizable, is the restriction operator diagonalizable?

Summary
The discussion centers on the diagonalizability of the restriction operator ##T_W## when ##T## is a diagonalizable linear operator on a vector space ##V##. It emphasizes that ##W## must be ##T##-invariant for ##T_W## to be a valid operator from ##W## to ##W##, which ensures that its matrix representation is square. The participants note that although ##m_T(T)## annihilates every vector of any subspace ##W##, the minimal polynomial of ##T_W## is undefined when ##W## is not ##T##-invariant, since then ##T_W## is not a map from ##W## to itself. The conclusion is that the invariance requirement is essential for the restriction operator to be well defined and for its diagonalizability to be asserted.
CGandC
TL;DR
Does the minimal polynomial of ##T## zero out the restriction of ##T## to an arbitrary subspace?
The usual theorem concerns a linear operator restricted to an invariant subspace:
Let ##T## be a diagonalizable linear operator on the ##n##-dimensional vector space ##V##, and let ##W## be a subspace of ##V## which is invariant under ##T##. Prove that the restriction operator ##T_W## is diagonalizable.
I had no problem understanding its proof; it appears here, for example: https://math.stackexchange.com/ques...-t-w-is-diagonalizable-if-t-is-diagonalizable However, I had difficulty understanding why we need the assumption that ##W## is ##T##-invariant. I mean: if ##m_T(x)## is the minimal polynomial of ##T##, then ##m_T(T)=0##, and thus for any subspace ##W \subseteq V## (not necessarily ##T##-invariant) we have ##m_T(T_W)=0##; so why was it necessary in the above theorem for ##W \subseteq V## to be ##T##-invariant?
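
As a concrete instance of the claim, take ##T : \mathbb{R}^2 \to \mathbb{R}^2## given by ##T(x,y) = (y,x)##; it is diagonalizable with minimal polynomial ##m_T(x) = x^2 - 1##, and for the non-invariant subspace ##W = \operatorname{span}\{ e_1 \}## we still get ##(T^2 - I)w = 0## for every ##w \in W##, simply because ##m_T(T) = 0## holds on all of ##V##.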
 
pasmith
If ##W## is not ##T##-invariant, then the matrix representation of ##T_W## is not square: it must include additional rows to account for the part of ##T_W(W)## which is not in ##W##. In what sense is this non-square matrix "diagonalizable"?

This theory is defined for linear maps ##T : V \to V## where the codomain is the same as the domain, rather than some different space. A restriction of ##T : V \to V## to a subspace ##W \subset V## will only qualify as such a map if we have ##T_W : W \to W##, i.e. ##W## is ##T##-invariant.
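
For instance, continuing the example from the opening post: with ##T(x,y) = (y,x)## and ##W = \operatorname{span}\{ e_1 \}## we have ##T(e_1) = e_2 \notin W##, so the restriction maps ##W## into ##\mathbb{R}^2## rather than into ##W##; with respect to the basis ##\{ e_1 \}## of ##W## and ##\{ e_1, e_2 \}## of ##\mathbb{R}^2##, its matrix is the ##2 \times 1## column ##\begin{pmatrix} 0 \\ 1 \end{pmatrix}##, which is not even square.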
 
pasmith said:
If ##W## is not ##T##-invariant, then the matrix representation of ##T_W## is not square ... In what sense is this non-square matrix "diagonalizable"?
Although it makes sense that the matrix representation of a diagonalizable operator should be a square matrix, I still don't see how this is needed to prove the theorem for a non-invariant subspace. A linear operator is also diagonalizable iff its minimal polynomial decomposes into distinct linear factors, each of multiplicity 1; this characterization shows that diagonalizability is independent of any matrix representation. Thus the minimal polynomial of ##T## restricted to some arbitrary subspace would still divide the minimal polynomial of ##T## itself, which is a product of distinct linear factors of multiplicity 1, so the minimal polynomial of the restriction would also be a product of distinct linear factors of multiplicity 1. Hence ##T## restricted to some subspace would still be diagonalizable (regardless of the subspace's invariance). So I still don't see why the subspace must be invariant in order to prove the above theorem.
 
Ok, I think I now fully understand what you have said.
The minimal polynomial of a square matrix is defined as the monic polynomial of least degree that zeros out that matrix.
The minimal polynomial is likewise defined for a linear transformation whose domain equals its codomain (i.e. ##T : V \to V##) as the monic polynomial of least degree that zeros out that transformation.

So although it is true that ##m_T(T_W) = 0## in the sense that ##m_T(T)## sends every vector of an arbitrary subspace ##W \subseteq V## to zero, it makes no sense to speak of the minimal polynomial of ##T_W## when ##W## is not ##T##-invariant, since then the domain of ##T_W## is not equal to its codomain.
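
Concretely, in the example above with ##T(x,y) = (y,x)## and ##W = \operatorname{span}\{ e_1 \}##: computing even ##T_W^2## would require ##T_W^2(e_1) = T_W(T_W(e_1)) = T_W(e_2)##, which is undefined because ##e_2## does not lie in the domain ##W## of ##T_W##. So ##p(T_W)## makes no sense for ##\deg p \geq 2##, and ##T_W## has no minimal polynomial; the statement ##m_T(T_W) = 0## really says that the operator ##m_T(T)## vanishes on ##W##, not that a polynomial has been evaluated at ##T_W##.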

Am I correct?
 