Why is A(w) - A_R(w) in W_1 in the Triangularization Theorem?

Thread starter: TomMe
Suppose A: V -> V is a linear transformation, with dim(V) = n. If the characteristic polynomial f_{A} of this transformation splits completely into linear factors over R, then there exists a basis of V with respect to which the matrix of A is upper triangular.

For me to get to the point I don't understand that well I have to give part of the proof:

So suppose \lambda_{1} is an eigenvalue of A and v_{1} an eigenvector for this eigenvalue. Then W_{1} := <v_{1}> is an A-invariant subspace of V.
Extend v_{1} to a basis v_{1}, v_{2},..,v_{n} of V. Then W_{2} := <v_{2},..,v_{n}> and V = W_{1} \oplus W_{2}.
The matrix of A with respect to this basis will be \left( \begin{array}{cc} \lambda_{1}&*\\ 0&R \end{array}\right)

Now suppose A_{R} is the linear transformation of W_{2} whose matrix is R with respect to the basis v_{2},..,v_{n}.

The proof now says that for all w \in W_{2}: A(w) - A_{R}(w) \in W_{1}.
I don't understand why this is the case, and I have no idea why this is important for the completion of the proof because it's not explicitly mentioned further on.

Can someone help me out here? Thanks!
 
Just work it out: what is A - A_{R} with respect to that basis?
 
Okay..I have to subtract two transformations then. How do I do that? I know that by definition (f+g)(x) = f(x) + g(x), but what are f(x) and g(x) in this case?

I can see the coordinates with respect to both bases, but they have a different number of coordinates, right? So I can't subtract them..

Don't see it. :frown:
 
Oh wait, if I apply A_{R} to w \in W_{2}, I have to convert that w to its coordinates with respect to the basis of W_{2}.

Then these n-1 coordinates match the last n-1 coordinates of A(w) with respect to the basis of V, is that right?

So if I subtract these coordinates, I get a coordinate vector in which only the first coordinate can be nonzero, so the difference is an element of W_{1}.

Is that the way to see it?
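
That is the right way to see it. In symbols (a sketch of the same computation, using the basis v_{1},..,v_{n} from the first post, and writing *_j for the j-th entry of the row marked * in the block matrix):

```latex
w = \sum_{j=2}^{n} c_j v_j \in W_2, \qquad
A(v_j) = *_j\, v_1 + \sum_{i=2}^{n} R_{ij}\, v_i, \qquad
A_R(v_j) = \sum_{i=2}^{n} R_{ij}\, v_i
\\[4pt]
\Longrightarrow\quad
A(w) - A_R(w) = \sum_{j=2}^{n} c_j \bigl( A(v_j) - A_R(v_j) \bigr)
= \Bigl( \sum_{j=2}^{n} c_j\, *_j \Bigr) v_1 \;\in\; W_1 .
```

Everything in the R-part of the two images cancels, and only a multiple of v_{1} survives.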
 
Okay..assuming I'm right, can anyone tell me why this is important for the proof? Because I can't see it.

Proof continued (proof by induction on the dimension):
----------------------
By computing |tI_{n} - A| via cofactor expansion along the first column, we get f_{A}(t) = (t - \lambda_{1}) f_{A_{R}}(t).

Thus f_{A_{R}}(t) also splits completely into linear factors over R (the real numbers).

The induction hypothesis, applied to the (n-1)-dimensional vector space W_{2} and the linear transformation A_{R}, now gives a basis v_{2}',..,v_{n}' of W_{2} such that the matrix R' of A_{R} with respect to this new basis is upper triangular.

The vectors v_{1}, v_{2}',..,v_{n}' now also form a basis of V, and the matrix of A with respect to this basis is of the form

\left( \begin{array}{cc} \lambda_{1}&*'\\ 0&R' \end{array}\right)

which is upper triangular. QED
----------------------
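
This last step is also, I believe, where the fact A(w) - A_{R}(w) \in W_{1} gets used silently: it guarantees that re-choosing the basis of W_{2} only changes the first row of the matrix, not the lower blocks. For each new basis vector v_{j}' \in W_{2}:

```latex
A(v_j') - A_R(v_j') \in W_1 = \langle v_1 \rangle,
\quad\text{say}\quad
A(v_j') = *'_j\, v_1 + A_R(v_j')
= *'_j\, v_1 + \sum_{i=2}^{n} R'_{ij}\, v_i' .
```

So column j of the matrix of A with respect to v_{1}, v_{2}',..,v_{n}' has *'_j on top and the j-th column of R' below it: the zero block and the upper triangular block R' appear exactly as claimed.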

Thanks in advance.
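
As a numerical sanity check of the first step of the construction (a sketch only; the test matrix and the QR-based way of extending v_{1} to a basis are my own choices, not from the proof):

```python
import numpy as np

# A non-symmetric 3x3 matrix whose characteristic polynomial splits
# over R: the companion matrix of (t - 1)(t - 2)(t - 3).
A = np.array([[0., 0., 6.],
              [1., 0., -11.],
              [0., 1., 6.]])

# Pick a real eigenvalue lambda_1 and a matching eigenvector v_1.
eigvals, eigvecs = np.linalg.eig(A)
lam1, v1 = eigvals[0].real, eigvecs[:, 0].real

# Extend v_1 to a basis of R^3: QR of [v_1 | I] returns an
# orthonormal basis whose first vector is v_1 (up to sign).
Q, _ = np.linalg.qr(np.column_stack([v1, np.eye(3)]))

# Matrix of A with respect to the new basis: its first column
# must be (lambda_1, 0, 0)^T, as in the block form of the proof.
B = Q.T @ A @ Q
print(np.round(B, 6))

assert np.isclose(B[0, 0], lam1)   # top-left entry is lambda_1
assert np.allclose(B[1:, 0], 0.)   # zeros below it
```

The lower-right 2x2 block of B is the matrix R acting on W_{2}, and iterating the same step on that block is exactly the induction in the proof.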
 