To get to the point I don't understand, I first have to give part of the proof:

So suppose [tex]\lambda_{1}[/tex] is an eigenvalue of A and [tex]v_{1}[/tex] an eigenvector for this eigenvalue. Then [tex]W_{1} := \langle v_{1} \rangle[/tex] is an A-invariant subspace of V.
Extend [tex]v_{1}[/tex] to a basis [tex]v_{1}, v_{2},..,v_{n}[/tex] of V. Then [tex]W_{2} := \langle v_{2},..,v_{n} \rangle[/tex] and [tex]V = W_{1} \oplus W_{2}[/tex].
The matrix of A with respect to this basis will be [tex]\left( \begin{array}{cc} \lambda_{1}&*\\ 0&R \end{array}\right)[/tex], since the first column holds the coordinates of [tex]A(v_{1}) = \lambda_{1} v_{1}[/tex].

Now suppose [tex]A_{R}[/tex] is the linear transformation of [tex]W_{2}[/tex] with matrix R with respect to the basis [tex]v_{2},..,v_{n}[/tex].

The proof now says that for all [tex]w \in W_{2}[/tex]: [tex]A(w) - A_{R}(w) \in W_{1}[/tex].
I don't understand why this is the case, and I have no idea why this is important for the completion of the proof because it's not explicitly mentioned further on.

Okay.. I have to subtract two transformations then. How do I do that? I know that by definition (f+g)(x) = f(x) + g(x), but what are f(x) and g(x) in this case?

I can see the coordinates with respect to both bases, but they have a different number of coordinates, right? So I can't subtract them..

Oh wait, if I apply [tex]A_{R}[/tex] to [tex]w \in W_{2}[/tex], I have to convert that w to its coordinates with respect to the basis of [tex]W_{2}[/tex].

Then these n-1 coordinates match the last n-1 coordinates of A(w) with respect to the basis of V, is that right?

So if I subtract these coordinate vectors, I get a coordinate vector in which only the first coordinate can be nonzero, so that is an element of [tex]W_{1}[/tex].

Okay..assuming I'm right, can anyone tell me why this is important for the proof? Because I can't see it.
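If it helps, the coordinate argument can be checked numerically. The matrix below is an arbitrary example I made up (not one from the book), written directly in coordinates with respect to the basis [tex]v_{1},..,v_{n}[/tex]:

```python
# Numerical check of the claim A(w) - A_R(w) in W_1 (sketch; the matrix
# below is a made-up example, not one from the text).
# First column (lam, 0, 0) encodes A(v_1) = lam * v_1.
lam = 2.0
A = [[lam, 5.0, -1.0],
     [0.0, 3.0,  4.0],
     [0.0, 1.0, -2.0]]
R = [row[1:] for row in A[1:]]          # lower-right (n-1)x(n-1) block

def matvec(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

w = [0.0, 7.0, -3.0]                    # an element of W_2 = <v_2, v_3>
Aw = matvec(A, w)                       # coordinates w.r.t. v_1,..,v_n
ARw = [0.0] + matvec(R, w[1:])          # A_R(w), embedded back into V

diff = [a - b for a, b in zip(Aw, ARw)]
print(diff)                             # -> [38.0, 0.0, 0.0]
```

Only the first coordinate of the difference is nonzero, so it indeed lies in [tex]W_{1} = \langle v_{1} \rangle[/tex].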

Proof continued (proof by induction on the dimension):
----------------------
By computing [tex]|tI_{n} - A|[/tex] using cofactor expansion along the first column we get [tex]f_{A}(t) = (t - \lambda_{1}) f_{A_{R}}(t)[/tex].
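Written out (with n = dim V, and * denoting the unimportant block), the expansion along the first column gives:

[tex]f_{A}(t) = |tI_{n} - A| = \left| \begin{array}{cc} t - \lambda_{1} & * \\ 0 & tI_{n-1} - R \end{array} \right| = (t - \lambda_{1})\,|tI_{n-1} - R| = (t - \lambda_{1})\,f_{A_{R}}(t)[/tex]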

Thus [tex]f_{A_{R}}(t)[/tex] also splits completely into linear factors over [tex]\mathbb{R}[/tex] (the real numbers).

The induction hypothesis, applied to the (n-1)-dimensional vector space [tex]W_{2}[/tex] and the linear transformation [tex]A_{R}[/tex], now gives a basis [tex]v_{2}',..,v_{n}'[/tex] of [tex]W_{2}[/tex] such that the matrix R' of [tex]A_{R}[/tex] with respect to this new basis is upper triangular.
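The whole induction can also be sketched numerically. This is just my own illustration (using numpy, with a made-up example matrix whose eigenvalues happen to be real); the eigenvector choice and basis extension follow the proof, it is not a numerically robust algorithm:

```python
import numpy as np

def triangularize(A):
    # Return an invertible P with P^{-1} A P upper triangular, assuming
    # f_A(t) splits into linear factors over the reals (a sketch of the
    # proof's induction on the dimension).
    n = A.shape[0]
    if n == 1:
        return np.eye(1)
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmin(np.abs(eigvals.imag)))   # pick a real eigenvalue
    v1 = np.real(eigvecs[:, k])
    # Extend v1 to a basis of V: swap out the standard basis vector with
    # the largest overlap, so the basis matrix B stays invertible.
    j = int(np.argmax(np.abs(v1)))
    B = np.eye(n)
    B[:, [0, j]] = B[:, [j, 0]]
    B[:, 0] = v1
    M = np.linalg.inv(B) @ A @ B        # block form [[lam, *], [0, R]]
    R = M[1:, 1:]
    Q = triangularize(R)                # induction on the (n-1)-dim W_2
    P2 = np.eye(n)
    P2[1:, 1:] = Q
    return B @ P2

A = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  4.0],
              [1.0, 1.0, -2.0]])        # eigenvalues all real here
P = triangularize(A)
T = np.linalg.inv(P) @ A @ P
print(np.round(T, 6))                   # entries below the diagonal are ~0
```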

The vectors [tex]v_{1}, v_{2}',..,v_{n}'[/tex] now also form a basis of V, and the matrix of A with respect to this basis is of the form