psie
- TL;DR Summary
- I'm working through two exercises, one following the other. The first asks one to prove directly the case ##k=2## of the theorem below; the other asks one to establish the theorem by induction for arbitrary ##k##. I'm a bit perplexed: how does one prove the theorem by induction? In particular, how is the induction hypothesis used in such a proof?
Theorem. Let ##\mathsf T## be a linear operator on a finite dimensional vector space ##\mathsf V##, and let ##\mathsf W_1,\mathsf W_2,\ldots,\mathsf W_k## be ##\mathsf T##-invariant subspaces of ##\mathsf V## such that ##\mathsf V=\mathsf W_1\oplus\mathsf W_2\oplus\cdots\oplus\mathsf W_k##. For each ##i##, let ##\beta_i## be an ordered basis for ##\mathsf W_i##, and let ##\beta=\beta_1\cup\beta_2\cup\cdots\cup\beta_k##. Let ##A=[\mathsf T]_\beta## and ##B_i=[\mathsf T_{\mathsf W_i}]_{\beta_i}## for ##i=1,2,\ldots, k##. Then ##A=B_1\oplus B_2\oplus \cdots\oplus B_k##.
Here ##A=B_1\oplus B_2\oplus \cdots\oplus B_k## means that ##A## is block diagonal with ##B_i## along the diagonal.
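Spelled out, the conclusion asserts the block form (with zero blocks ##O## everywhere off the diagonal):

```latex
A = [\mathsf T]_\beta =
\begin{pmatrix}
B_1 & O & \cdots & O \\
O & B_2 & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
O & O & \cdots & B_k
\end{pmatrix},
\qquad B_i = [\mathsf T_{\mathsf W_i}]_{\beta_i}.
```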
A proof that I've seen goes as follows. The ##k=2## case boils down to the observation that if ##v\in\beta_1\subseteq\mathsf W_1##, then ##\mathsf T(v)## is a linear combination of vectors in ##\beta_1## (since ##\mathsf W_1## is ##\mathsf T##-invariant and ##\beta_1## is a basis for this subspace), and the same reasoning applies to ##\mathsf T(v)## for any ##v\in\beta_2##. So the matrix representation with respect to ##\beta_1\cup\beta_2## is block diagonal. It seems one could simply repeat this argument for any finite number of subspaces ##k##. How would one prove the theorem by induction instead? In particular, how does one use the induction hypothesis in such a proof?
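Not part of the exercise, but here is a quick numerical sanity check of the ##k=2## case. The matrices ##B_1,B_2## and the basis ##\beta## (columns of `P`) below are made up for illustration: ##\mathsf T## is defined on ##\mathbb R^4## so that ##\mathsf W_1,\mathsf W_2## (the spans of the first two and last two columns of `P`) are ##\mathsf T##-invariant by construction, and we verify that ##[\mathsf T]_\beta## is block diagonal even though the standard-basis matrix of ##\mathsf T## is not.

```python
import numpy as np

# Restrictions of T to the invariant subspaces, in the bases beta_1, beta_2
B1 = np.array([[1., 2.],
               [0., 3.]])   # [T|_W1]_{beta_1}
B2 = np.array([[4., 1.],
               [1., 4.]])   # [T|_W2]_{beta_2}

# Columns of P are the vectors of beta = beta_1 ∪ beta_2 (a basis of R^4)
P = np.array([[1., 1., 0., 1.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [1., 0., 0., 1.]])
assert abs(np.linalg.det(P)) > 1e-12  # beta really is a basis

# [T]_beta is block diagonal by construction; conjugating by P gives the
# matrix of T in the standard basis of R^4.
A_beta = np.block([[B1, np.zeros((2, 2))],
                   [np.zeros((2, 2)), B2]])
A_std = P @ A_beta @ np.linalg.inv(P)

# In the standard basis, T is generally NOT block diagonal...
assert not np.allclose(A_std[:2, 2:], 0)

# ...but changing coordinates back to beta recovers B1 ⊕ B2.
A_recovered = np.linalg.inv(P) @ A_std @ P
assert np.allclose(A_recovered[:2, :2], B1)
assert np.allclose(A_recovered[2:, 2:], B2)
assert np.allclose(A_recovered[:2, 2:], 0)  # off-diagonal blocks vanish
assert np.allclose(A_recovered[2:, :2], 0)
print("[T]_beta = B1 ⊕ B2, as the theorem predicts")
```

Of course this only illustrates the statement, not the proof; the algebraic content is that the columns of ##[\mathsf T]_\beta## corresponding to ##\beta_i## have nonzero entries only in the rows corresponding to ##\beta_i##.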