multiplying (square) matrices is complicated: we have n^2 inner products of rows and columns to consider, which comes to:
n^2(2n - 1) arithmetical operations in all (n products in each inner product, plus n - 1 additions to sum them, times n^2 entries).
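(to make that count concrete, here is a small python sketch, not part of the original argument, that multiplies two n x n matrices the naive way and tallies the arithmetic as it goes:)

```python
# naive n x n matrix product with an operation counter,
# illustrating the n^2 * (2n - 1) count above
def matmul_count(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    ops = 0
    for i in range(n):
        for j in range(n):
            # inner product of row i of A with column j of B:
            # n multiplications plus n - 1 additions
            total = A[i][0] * B[0][j]
            ops += 1
            for k in range(1, n):
                total += A[i][k] * B[k][j]
                ops += 2  # one multiplication, one addition
            C[i][j] = total
    return C, ops

_, ops = matmul_count([[1] * 4 for _ in range(4)],
                      [[1] * 4 for _ in range(4)])
print(ops)  # 112, i.e. 4^2 * (2*4 - 1)
```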
multiplying diagonal matrices is much simpler: the resulting product is ALSO diagonal, and requires only n multiplications:
diag{a_1,...,a_n} * diag{b_1,...,b_n} = diag{a_1b_1,...,a_nb_n}
even when n is small (like say n = 4), this is a tremendous savings of calculational effort (we only have 4 steps of arithmetic, rather than 112).
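(the diagonal case, by contrast, fits in one line; storing each matrix as just the list of its diagonal entries is my choice of representation, not something from the original:)

```python
# product of diag{a_1,...,a_n} and diag{b_1,...,b_n}: n multiplications, nothing else
def diag_mul(a, b):
    return [ai * bi for ai, bi in zip(a, b)]

print(diag_mul([1, 2, 3, 4], [5, 6, 7, 8]))  # [5, 12, 21, 32], just 4 multiplications
```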
it also makes calculating the determinant MUCH more tractable: the determinant is invariant under a similarity transform, so det(A) = det(D). for an nxn matrix, computing the determinant normally requires calculating n! n-fold products and then summing these (with appropriate signs), whereas computing the determinant of a diagonal matrix requires just ONE n-fold product.
for example, computing a 5x5 determinant means 120 signed 5-fold products, nearly 600 arithmetical operations in all (even determining WHICH 120 products to compute is tedious), whereas computing a 5x5 diagonal matrix's determinant is a single 5-fold product that can often be done in your head.
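(again, a rough python sketch to make the comparison tangible; the permutation-sum formula below is the standard Leibniz expansion, and the example entries are made up:)

```python
# determinant as a sum of n! signed n-fold products (Leibniz expansion),
# versus the one-product shortcut for a diagonal matrix
from itertools import permutations

def sign(perm):
    # parity of a permutation, via its inversion count
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):  # n! terms: 120 of them for n = 5
        term = sign(perm)
        for i in range(n):               # each term is an n-fold product
            term *= A[i][perm[i]]
        total += term
    return total

def det_diag(d):
    # determinant of diag{d_1,...,d_n}: a single n-fold product
    prod = 1
    for di in d:
        prod *= di
    return prod

print(det_diag([2, 3, 1, 5, 4]))  # 120, doable in your head
```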
moreover, if A is diagonalizable, diagonalizing A illustrates a deep connection: the diagonal entries of the diagonalized matrix are precisely the eigenvalues of A, and the columns of the diagonalizing matrix P are the corresponding eigenvectors of A (and since P is invertible, the eigenvectors form an eigenbasis).
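(here is a quick numerical illustration of that connection, using numpy's eig; the particular 2x2 matrix is just an example I picked:)

```python
# numpy.linalg.eig returns the eigenvalues (the diagonal of D) and the
# eigenvectors (the columns of P), and indeed A = P D P^(-1)
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)  # eigenvalues 5 and 2, eigenvectors as columns
D = np.diag(eigvals)           # the diagonalized matrix
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```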
the "catch" here is that not all matrices ARE diagonalizable. it turns out, however, that we can at least "semi-diagonalize" A into the sum:
D + N, where D is diagonal, and N is nilpotent.
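(for a concrete "bad matrix", take [[2,1],[0,2]]: it has only a one-dimensional eigenspace, so it isn't diagonalizable, but its diagonal and nilpotent parts can be read off directly. in general you'd need a change of basis, a la Jordan form, before the split looks this clean; the sketch below is my own illustration:)

```python
# a non-diagonalizable matrix split as D + N, with D diagonal and N nilpotent
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
D = np.diag(np.diag(A))        # [[2, 0], [0, 2]]
N = A - D                      # [[0, 1], [0, 0]]
print(np.allclose(A, D + N))   # True
print(np.allclose(N @ N, 0))   # True: N squares to zero, so it is nilpotent
```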
this shows how important understanding nilpotent linear transformations is to "getting a good picture of bad matrices" (the diagonalizable ones being "good matrices").
if a matrix function can be represented as a power series (such as in the exponential example Ackbach gives), then computing the matrix function becomes a LOT easier if our matrix is diagonalizable: since A^k = PD^kP^(-1), every term of the series can be computed on the diagonal entries alone.
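(a sketch of that collapse for the exponential, using numpy and a hand-rolled truncated power series as the sanity check; the matrix is the same toy example as above:)

```python
# for diagonalizable A we have A^k = P D^k P^(-1), so the series for exp(A)
# collapses to P exp(D) P^(-1), where exp(D) is just exp applied to the diagonal
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)
expA = P @ np.diag(np.exp(eigvals)) @ np.linalg.inv(P)

# compare against the truncated power series I + A + A^2/2! + ...
series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 30):
    series += term
    term = term @ A / k
print(np.allclose(expA, series))  # True
```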
unfortunately, the set of diagonalizable matrices isn't closed under matrix addition, which is a darn shame.
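(a pair of matrices showing why: each summand below is diagonalizable, one because it has two distinct eigenvalues and the other because it's already diagonal, yet their sum is a nonzero nilpotent matrix, and those are never diagonalizable. the sympy check is just my way of verifying it:)

```python
# diagonalizable + diagonalizable need not be diagonalizable
from sympy import Matrix

A = Matrix([[1, 1], [0, 0]])   # eigenvalues 1 and 0, so diagonalizable
B = Matrix([[-1, 0], [0, 0]])  # already diagonal
print(A.is_diagonalizable(), B.is_diagonalizable())  # True True
print((A + B).is_diagonalizable())                   # False: A + B = [[0,1],[0,0]]
```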