The basic example of a linear transformation in mathematics is perhaps differentiation.
This is not readily described by a matrix, since the space of differentiable functions is infinite-dimensional.
However, when restricted to a finite-dimensional subspace such as the polynomials of degree ≤ n, it yields the basic example of a "nilpotent" matrix.
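As a small sketch of this (Python with NumPy; the degree bound n = 3 is just an illustrative choice), the matrix of d/dt in the basis {1, t, ..., t^n} has the coefficients from d/dt(t^j) = j·t^(j-1) on the superdiagonal, and its (n+1)-st power is zero:

```python
import numpy as np

n = 3  # illustrative degree bound; basis is {1, t, t^2, t^3}

# Column j holds the coordinates of d/dt(t^j) = j*t^(j-1) in that basis.
D = np.zeros((n + 1, n + 1))
for j in range(1, n + 1):
    D[j - 1, j] = j

# D is nilpotent: differentiating n+1 times kills every polynomial of degree <= n.
assert np.allclose(np.linalg.matrix_power(D, n + 1), 0)
```

Note that n+1 is the smallest power that vanishes: D^n is still nonzero, since the n-th derivative of t^n is the nonzero constant n!.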
When restricted to a finite-dimensional space of functions of the form (e^ct)t^k, for 0 ≤ k ≤ n, it yields as matrix a basic Jordan block.
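One can see this Jordan block concretely by letting a computer algebra system do the differentiation. A sketch in Python with SymPy, using the scaled basis e^(ct)·t^k/k! (the scaling by k! is a choice made here to put exact 1's on the superdiagonal; with the unscaled basis one gets a similar matrix) and the arbitrary sample value c = 2:

```python
import sympy as sp

t = sp.symbols('t')
c = sp.Integer(2)  # sample eigenvalue; any constant works
n = 3
basis = [sp.exp(c * t) * t**k / sp.factorial(k) for k in range(n + 1)]

# Column j: coordinates of d/dt(basis[j]) with respect to the basis itself.
M = sp.zeros(n + 1, n + 1)
for j, f in enumerate(basis):
    df = sp.diff(f, t)
    # Strip off the common factor e^(ct); what remains is a polynomial in t.
    poly = sp.Poly(sp.expand(df * sp.exp(-c * t)), t)
    for i in range(n + 1):
        M[i, j] = poly.coeff_monomial(t**i) * sp.factorial(i)

print(M)  # c on the diagonal, 1's on the superdiagonal: a Jordan block
```

The key computation is d/dt(e^(ct)·t^k/k!) = c·(e^(ct)·t^k/k!) + e^(ct)·t^(k-1)/(k-1)!, which is exactly "c times the k-th basis vector plus the (k-1)-st," i.e. a Jordan block column.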
Failure to give this example early on may explain why students of advanced linear algebra find Jordan form so strange.
For a presentation of linear algebra including these examples, you may see either the nice book by Insel, Friedberg, and Spence, or my free notes (#7) for Math 4050 on my website at UGA:
http://www.math.uga.edu/%7Eroy/4050sum08.pdf
One difference between these two sources is my heavier reliance on the basic concept of the minimal polynomial for a linear transformation. Their book, replete with examples, is also about 4 times as long as mine.
By the way, the famous mathematician Emil Artin argues in his book Geometric Algebra precisely that linear transformations should be the primary object of study, and that matrices should be left out almost entirely, except perhaps when a computation needs to be made, say of a determinant; then, he says, they should immediately be thrown out again afterwards. I.e., matrices are entirely a device for representing and computing with linear transformations.
A matrix is given by a pair: (linear transformation of a vector space, basis for that vector space)
Equivalently, given a vector space and a linear transformation, matrices for that transformation correspond to bases of the vector space. Hence a good choice of basis may be expected to give a good matrix.
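As a small illustration of this last point (Python with NumPy; the symmetric matrix A is just a made-up example): changing from the standard basis to a basis of eigenvectors turns the matrix of the same transformation into a diagonal one.

```python
import numpy as np

# A linear transformation written in the standard basis (made-up example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A "good" basis: the eigenvectors of A, stored as the columns of P.
eigenvalues, P = np.linalg.eigh(A)

# The matrix of the same transformation in the eigenvector basis.
B = np.linalg.inv(P) @ A @ P

print(np.round(B, 10))  # diagonal, with the eigenvalues on the diagonal
```

The formula B = P⁻¹AP is exactly the "matrix = (transformation, basis)" point of view: A and B represent one transformation, written in two different bases.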