I Proof by induction of block diagonal decomposition of a matrix

psie
TL;DR Summary
I'm working two exercises, one following the other. The first asks one to prove directly the case ##k=2## of the theorem below. The other is about establishing the theorem by induction for any ##k##. I'm a bit perplexed; how do you prove the theorem by induction? In particular, how do you use the induction hypothesis in a possible proof by induction?
Theorem. Let ##\mathsf T## be a linear operator on a finite dimensional vector space ##\mathsf V##, and let ##\mathsf W_1,\mathsf W_2,\ldots,\mathsf W_k## be ##\mathsf T##-invariant subspaces of ##\mathsf V## such that ##\mathsf V=\mathsf W_1\oplus\mathsf W_2\oplus\cdots\oplus\mathsf W_k##. For each ##i##, let ##\beta_i## be an ordered basis for ##\mathsf W_i##, and let ##\beta=\beta_1\cup\beta_2\cup\cdots\cup\beta_k##. Let ##A=[\mathsf T]_\beta## and ##B_i=[\mathsf T_{\mathsf W_i}]_{\beta_i}## for ##i=1,2,\ldots, k##. Then ##A=B_1\oplus B_2\oplus \cdots\oplus B_k##.

Here ##A=B_1\oplus B_2\oplus \cdots\oplus B_k## means that ##A## is block diagonal with ##B_i## along the diagonal.

A proof that I've seen goes as follows. The ##k=2## case boils down to realizing that if ##v\in\beta_1\subseteq\mathsf W_1##, then ##\mathsf T(v)## is a linear combination of vectors in ##\beta_1## (since ##\mathsf W_1## is ##\mathsf T##-invariant and ##\beta_1## is a basis for this subspace). A similar thing can be said about ##\mathsf T(v)## for any ##v\in\beta_2##. So clearly the matrix representation with respect to ##\beta_1 \cup \beta_2## is block diagonal. I feel like one could go on with this argument for any finite number of subspaces ##k##. How do you prove this by induction? In particular, how do you use the induction hypothesis in a possible proof by induction?
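The ##k=2## argument above can be checked numerically. Below is a minimal sketch with hypothetical matrices (not from the thread): we build an operator ##T## on ##\mathbb R^3## that leaves ##\mathsf W_1=\operatorname{span}\beta_1## and ##\mathsf W_2=\operatorname{span}\beta_2## invariant, then verify that its matrix with respect to ##\beta=\beta_1\cup\beta_2## is block diagonal.

```python
import numpy as np

# Hypothetical example: the columns of P are beta1 (first two) and beta2 (last).
P = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])

B1 = np.array([[2., 1.],
               [0., 2.]])   # [T_{W1}]_{beta1}
B2 = np.array([[3.]])       # [T_{W2}]_{beta2}

# Build T in the standard basis so that W1 and W2 are T-invariant:
# T acts as B1 on W1 and as B2 on W2.
B = np.block([[B1, np.zeros((2, 1))],
              [np.zeros((1, 2)), B2]])
T = P @ B @ np.linalg.inv(P)

# Changing basis back to beta = beta1 ∪ beta2 recovers the block diagonal form.
A = np.linalg.inv(P) @ T @ P
print(np.round(A, 10))  # off-diagonal blocks vanish: A = B1 ⊕ B2
```

The construction is circular by design (we build ##T## from the blocks), but it makes visible exactly which entries the ##T##-invariance forces to zero.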
 
Are you asking about induction in general or in this particular case?
 
PeroK said:
Are you asking about induction in general or in this particular case?
In this particular case. I'm confused about how one would use the induction hypothesis.
 
The case ##k=2## is the induction step. The induction hypothesis is that the theorem holds for ##k-1## subspaces, i.e. for ##\mathsf W'=\mathsf W_1\oplus\ldots\oplus\mathsf W_{k-1}##, which is itself ##\mathsf T##-invariant as a sum of ##\mathsf T##-invariant subspaces. Since every semidirect product in the category of vector spaces is already direct, there is not much to add. It gets interesting in categories in which short exact sequences do not automatically split.
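In the notation of the theorem, the induction step can be sketched as follows (a sketch, writing ##\mathsf W'## and ##\beta'## for the combination of the first ##k-1## pieces):

```latex
% Induction step (k >= 3), assuming the theorem for k-1 subspaces.
% W' is T-invariant, being a sum of T-invariant subspaces:
\mathsf W' := \mathsf W_1 \oplus \cdots \oplus \mathsf W_{k-1},
\qquad \beta' := \beta_1 \cup \cdots \cup \beta_{k-1}.
% The k = 2 case applied to V = W' ⊕ W_k gives
[\mathsf T]_\beta = [\mathsf T_{\mathsf W'}]_{\beta'} \oplus B_k,
% and the induction hypothesis applied to T_{W'} gives
[\mathsf T_{\mathsf W'}]_{\beta'} = B_1 \oplus \cdots \oplus B_{k-1},
% hence
[\mathsf T]_\beta = B_1 \oplus \cdots \oplus B_{k-1} \oplus B_k.
```

So the induction hypothesis is applied not to ##\mathsf T## on ##\mathsf V##, but to the restriction ##\mathsf T_{\mathsf W'}## on the smaller space ##\mathsf W'##, which satisfies all the hypotheses of the theorem with ##k-1## subspaces.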
 
That split exact sequences come for free here is not the same thing as invariance under ##T.## Splitting in the category of vector spaces concerns only the choice of bases, i.e. that any quotient ##V/U## can be realized as a subspace again. I was wrong to use this as an argument for the theorem, since the invariance under the transformation is crucial here, not just the existence of complementary subspaces.

If we have only one ##T##-invariant subspace ##U\subseteq V,## then the matrix of ##T## is merely block triangular: basis vectors of a complement of ##U## may map to vectors with components along the basis of ##U.## This happens all the time in the categories of groups, rings, algebras, etc., when ##U## is a normal subgroup or an ideal but ##V/U## is not isomorphic to a normal subgroup or ideal again.

In this case, we need the ##T##-invariance of all subspaces to control the off-diagonal entries.
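A minimal numerical sketch of this contrast, again with a hypothetical matrix: if only ##W_1=\operatorname{span}\{e_1,e_2\}## is ##T##-invariant and nothing is assumed about a complement, the lower-left block vanishes but the upper-right block need not.

```python
import numpy as np

# Hypothetical operator on R^3; only W1 = span{e1, e2} is T-invariant.
T = np.array([[2., 1., 5.],
              [0., 2., 7.],
              [0., 0., 3.]])

e1, e2, e3 = np.eye(3)

# W1 is invariant: T e1 and T e2 have no component along e3.
assert np.isclose((T @ e1)[2], 0) and np.isclose((T @ e2)[2], 0)

# But span{e3} is not invariant: T e3 has components along e1 and e2,
# which is exactly the nonzero upper-right block (the entries 5 and 7).
print(T @ e3)  # components along e1, e2 are nonzero
```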
 