Why would you want to use a matrix for a linear transformation? Why not just use the given transformation instead of writing it as a matrix?
If you have a number of rotations to be performed in succession, you can just multiply the matrices. You can also determine information about a rotation, for example the axis of rotation, by calculating the eigenvectors of the matrix.
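To illustrate the point above, here is a minimal numpy sketch (the rotation angles and axes are arbitrary choices for the example): composing two 3D rotations is a single matrix product, and the axis of the combined rotation is the eigenvector with eigenvalue 1.

```python
import numpy as np

# Rotation matrices about the z- and x-axes (angle in radians).
def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Performing the rotations in succession is just multiplying the matrices.
R = rot_x(np.pi / 3) @ rot_z(np.pi / 4)

# The axis of the combined rotation is the eigenvector with eigenvalue 1,
# i.e. the direction left fixed by R.
vals, vecs = np.linalg.eig(R)
axis = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(axis)  # a unit vector satisfying R @ axis == axis
```

Working directly with the two rotations as abstract maps gives no such simple recipe for the combined axis; the matrix form hands it to you via one eigenvector computation.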
Using the transformation or using the matrix is equivalent; you won't lose information by using the matrix. If you want to keep using the transformation directly, you can. But in many cases the matrix is simply much easier to work with: finding eigenvalues, for example, is much easier with a matrix than with an abstract transformation.
One reason is: a matrix calculation reduces the computation of a composition of linear transformations, as well as the computation of images under a linear transformation, to arithmetic operations in the underlying field. That is: conceptual ---> numerical. Sometimes this is preferable for getting "actual answers" in a physical application, where some preferred basis (coordinate system) might already be supplied.

For example, the differentiation operator D is a linear transformation from P_{n}(F) to P_{n}(F), and actually "computing a derivative" IS just computing the matrix product [D]_{B}[p]_{B} = [p']_{B}. For n = 2 and F = R, with the basis B = {1, x, x^{2}}, we have

[D]_{B} =
[0 1 0]
[0 0 2]
[0 0 0],

so if p(x) = a + bx + cx^{2}, then p'(x) = b + 2cx. Of course, this would be just as easy using D(p) = p' from the calculus definition, but it's not so clear what happens if you want to use THIS basis: {1+x, 1-x, 1-x^{2}}. With the calculus definition that is murky, whereas the matrix form makes it transparent.
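The matrix product above can be checked numerically. A small sketch, using the same basis B = {1, x, x^2} and arbitrary example coefficients a, b, c:

```python
import numpy as np

# Matrix of the differentiation operator D on P_2(R) in the basis B = {1, x, x^2}:
# columns are the B-coordinates of D(1) = 0, D(x) = 1, D(x^2) = 2x.
D = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]])

# p(x) = a + b x + c x^2 has coordinate vector [a, b, c] in B.
a, b, c = 5, 3, 7
p = np.array([a, b, c])

# [p']_B = [D]_B [p]_B: the coordinates of the derivative b + 2c x.
p_prime = D @ p
print(p_prime)  # [3, 14, 0], i.e. p'(x) = 3 + 14x
```

Changing basis to {1+x, 1-x, 1-x^2} just means conjugating D by the change-of-basis matrix; the calculus definition offers no comparably mechanical route.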
Is it by using a matrix representation of a derivative that CAS and programmable calculators evaluate derivatives?