Why is A(w) - A_R(w) in W_1 in the Triangularization Theorem?

TomMe
Suppose A: V -> V is a linear transformation, with dim(V) = n. If the characteristic polynomial f_{A} of this transformation splits completely into linear factors over R, then there exists a basis of V with respect to which the matrix of A is upper triangular.

To get to the point I don't understand, I first have to give part of the proof:

So suppose \lambda_{1} is an eigenvalue of A and v_{1} an eigenvector for this eigenvalue. Then W_{1} := <v_{1}> is an A-invariant subspace of V.
Extend v_{1} to a basis v_{1}, v_{2},..,v_{n} of V, and set W_{2} := <v_{2},..,v_{n}>, so that V = W_{1} \oplus W_{2}.
The matrix of A with respect to this basis will be \left( \begin{array}{cc} \lambda_{1}&*\\ 0&R \end{array}\right)
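(To see where this block form comes from: the first column of the matrix holds the coordinates of A(v_{1}) with respect to v_{1},..,v_{n}, and since A(v_{1}) = \lambda_{1} v_{1}, that column is (\lambda_{1}, 0,..,0)^{T}. The columns for A(v_{2}),..,A(v_{n}) contribute their first entries to the row block * and their remaining n-1 entries to the (n-1) \times (n-1) block R.)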

Now suppose A_{R} is the linear transformation of W_{2} with matrix R with respect to the basis v_{2},..,v_{n}.

The proof now says that for all w \in W_{2}: A(w) - A_{R}(w) \in W_{1}.
I don't understand why this is the case, and I have no idea why it is important for the completion of the proof, because it's not explicitly mentioned further on.

Can someone help me out here? Thanks!
 
Just work it out: what is A - A_{R} with respect to that basis?
 
Okay..I have to subtract two transformations then. How do I do that? I know that by definition (f+g)(x) = f(x) + g(x), but what are f(x) and g(x) in this case?

I can write down the coordinates with respect to both bases, but they have a different number of entries, right? So I can't subtract them..

Don't see it. :frown:
 
Oh wait, if I apply A_{R} to w \in W_{2}, I have to convert that w to its coordinates with respect to the basis of W_{2}.

Then these n-1 coordinate entries match the last n-1 coordinate entries of A(w) with respect to the basis of V, is that right?

So if I subtract these coordinates, I get a coordinate vector in which only the first entry can be nonzero, so that is an element of W_{1}.

Is that the way to see it?
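Spelled out, writing the top block * as a row vector (a_{2},..,a_{n}) (notation introduced here just for this check): if w \in W_{2} has coordinates (c_{2},..,c_{n})^{T} with respect to v_{2},..,v_{n}, then reading off the block matrix gives

A(w) = \left( \sum_{j=2}^{n} a_{j} c_{j} \right) v_{1} + A_{R}(w),

so A(w) - A_{R}(w) = \left( \sum_{j=2}^{n} a_{j} c_{j} \right) v_{1}, which indeed lies in W_{1}.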
 
Okay..assuming I'm right, can anyone tell me why this is important for the proof? Because I can't see it.

Proof continued (by induction on the dimension):
----------------------
By computing |tI_{n} - A| using cofactor expansion along the first column, we get f_{A}(t) = (t - \lambda_{1}) f_{A_{R}}(t).

Thus f_{A_{R}}(t) also splits completely into linear factors over R (the real numbers).

The induction hypothesis, applied to the (n-1)-dimensional vector space W_{2} and the linear transformation A_{R}, now gives a basis v_{2}',..,v_{n}' of W_{2} such that the matrix R' of A_{R} with respect to this new basis is upper triangular.

The vectors v_{1}, v_{2}',..,v_{n}' now also form a basis of V, and the matrix of A with respect to this basis is of the form

\left( \begin{array}{cc} \lambda_{1}&*'\\ 0&R' \end{array}\right)

which is upper triangular. QED
----------------------

Thanks in advance.
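As for why A(w) - A_{R}(w) \in W_{1} matters: it is the step that makes the final claim work, even though the proof above doesn't invoke it explicitly. Spelling that out: since R' is upper triangular, A_{R}(v_{j}') \in <v_{2}',..,v_{j}'> for each j \geq 2, and therefore

A(v_{j}') = \left( A(v_{j}') - A_{R}(v_{j}') \right) + A_{R}(v_{j}') \in W_{1} + <v_{2}',..,v_{j}'> = <v_{1}, v_{2}',..,v_{j}'>.

So A maps each v_{j}' into the span of the basis vectors up to and including v_{j}' (and A(v_{1}) = \lambda_{1} v_{1}), which is exactly what it means for the matrix of A with respect to v_{1}, v_{2}',..,v_{n}' to be upper triangular.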
 