Proof by induction of block diagonal decomposition of a matrix

psie
TL;DR Summary
I'm working through two exercises, one following the other. The first asks one to prove directly the case ##k=2## of the theorem below; the other is about establishing the theorem by induction for any ##k##. I'm a bit perplexed: how does one prove the theorem by induction? In particular, how does one use the induction hypothesis in such a proof?
Theorem. Let ##\mathsf T## be a linear operator on a finite dimensional vector space ##\mathsf V##, and let ##\mathsf W_1,\mathsf W_2,\ldots,\mathsf W_k## be ##\mathsf T##-invariant subspaces of ##\mathsf V## such that ##\mathsf V=\mathsf W_1\oplus\mathsf W_2\oplus\cdots\oplus\mathsf W_k##. For each ##i##, let ##\beta_i## be an ordered basis for ##\mathsf W_i##, and let ##\beta=\beta_1\cup\beta_2\cup\cdots\cup\beta_k##. Let ##A=[\mathsf T]_\beta## and ##B_i=[\mathsf T_{\mathsf W_i}]_{\beta_i}## for ##i=1,2,\ldots, k##. Then ##A=B_1\oplus B_2\oplus \cdots\oplus B_k##.

Here ##A=B_1\oplus B_2\oplus \cdots\oplus B_k## means that ##A## is block diagonal with ##B_i## along the diagonal.
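Spelled out, with ##O## denoting zero blocks of the appropriate sizes, the conclusion is
$$A=\begin{pmatrix}B_1&O&\cdots&O\\O&B_2&\cdots&O\\\vdots&\vdots&\ddots&\vdots\\O&O&\cdots&B_k\end{pmatrix}.$$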

A proof that I've seen goes as follows. The ##k=2## case boils down to realizing that if ##v\in\beta_1\subseteq\mathsf W_1##, then ##\mathsf T(v)## is a linear combination of vectors in ##\beta_1## (since ##\mathsf W_1## is ##\mathsf T##-invariant and ##\beta_1## is a basis for this subspace). A similar thing can be said about ##\mathsf T(v)## for any ##v\in\beta_2##. So clearly the matrix representation with respect to ##\beta_1 \cup \beta_2## is block diagonal. I feel like one could go on with this argument for any finite number of subspaces ##k##. How do you prove this by induction? In particular, how do you use the induction hypothesis in a possible proof by induction?
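To sketch this concretely for ##k=2## (writing, just for this sketch, ##\beta_1=\{v_1,\ldots,v_m\}## and ##\beta_2=\{v_{m+1},\ldots,v_n\}##): for ##1\le j\le m##, the ##\mathsf T##-invariance of ##\mathsf W_1## gives
$$\mathsf T(v_j)=\sum_{i=1}^{m}(B_1)_{ij}\,v_i,$$
so the ##j##-th column of ##A## vanishes in rows ##m+1,\ldots,n##; likewise the last ##n-m## columns vanish in rows ##1,\ldots,m##, which is exactly ##A=B_1\oplus B_2##.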
 
PeroK
Are you asking about induction in general or in this particular case?
 
PeroK said:
Are you asking about induction in general or in this particular case?
In this particular case. I'm confused about how one would use the induction hypothesis.
 
The case ##k=2## is the induction step. The induction hypothesis is that the theorem holds for ##k-1## subspaces, i.e. for the ##\mathsf T##-invariant subspace ##\mathsf W_1\oplus\cdots\oplus\mathsf W_{k-1}##. Since every semidirect product in the category of vector spaces is already direct, there is not much to add. It gets interesting in categories in which short exact sequences do not automatically split.
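A sketch of the step, writing (just for this sketch) ##\mathsf W'=\mathsf W_1\oplus\cdots\oplus\mathsf W_{k-1}## and ##\beta'=\beta_1\cup\cdots\cup\beta_{k-1}##: ##\mathsf W'## is ##\mathsf T##-invariant, being a sum of ##\mathsf T##-invariant subspaces, so the ##k=2## case applied to ##\mathsf V=\mathsf W'\oplus\mathsf W_k## gives
$$[\mathsf T]_\beta=\begin{pmatrix}[\mathsf T_{\mathsf W'}]_{\beta'}&O\\O&B_k\end{pmatrix},$$
and the induction hypothesis applied to ##\mathsf T_{\mathsf W'}## on ##\mathsf W'## gives ##[\mathsf T_{\mathsf W'}]_{\beta'}=B_1\oplus\cdots\oplus B_{k-1}##, hence ##A=B_1\oplus\cdots\oplus B_k##.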
 
A correction: the automatic splitting of a short exact sequence here would amount to the invariance under ##T.## That is, the split exact sequences in the category of vector spaces concern only the choice of bases, i.e. that any quotient ##V/U## can be regarded as a subspace again. I was wrong to use this as an argument for the theorem, since the invariance under the transformation is crucial here, not just the existence of complementary subspaces.

If we have only one ##T##-invariant subspace ##U\subseteq V,## then the matrix of ##T## is merely block triangular, since the images of basis vectors representing ##V/U## can have components along the basis of ##U.## This happens all the time in the categories of groups, rings, algebras, etc., when ##U## is a normal subgroup or an ideal but ##V/U## is not isomorphic to a normal subgroup or ideal again.
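Schematically (a sketch, with the basis of ##U## extended to a basis ##\beta## of ##V##, and ##\bar T## denoting the induced map on ##V/U##):
$$[T]_\beta=\begin{pmatrix}[T_U]&\ast\\O&[\bar T]\end{pmatrix},$$
where the ##\ast## block need not vanish unless the chosen complement of ##U## is itself ##T##-invariant.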

In this case, we need the ##T##-invariance of all subspaces to control the off-diagonal entries.
 