We write any linear transformation as a matrix by choosing a specific basis and seeing what the transformation does to the basis vectors. That is, if v1, v2, . . ., vn is a basis, we can write any vector v = a1v1 + a2v2 + . . . + anvn and represent v as the "n-tuple" (a1, a2, . . ., an). The basis vectors themselves are represented by (1, 0, . . ., 0), (0, 1, . . ., 0), etc.
Applying the linear transformation to v1 is the same as multiplying the matrix by (1, 0, . . ., 0), which picks out the first column of the matrix; in general, the jth column of the matrix is the tuple of coefficients of T(vj) in that basis.
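As a quick sanity check of the "columns are images of basis vectors" idea, here is a minimal NumPy sketch with a made-up 2x2 matrix (the matrix A is just an illustrative example, not anything from the question):

```python
import numpy as np

# A hypothetical 2x2 matrix representing some linear transformation T
# in a chosen basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The standard basis vectors (1, 0) and (0, 1).
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Multiplying A by a basis vector picks out the corresponding column of A.
print(A @ e1)  # [2. 0.] -- the first column
print(A @ e2)  # [1. 3.] -- the second column
```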
Suppose T is a linear transformation on a vector space V, with eigenvalues λ1, λ2, . . ., λn, whose corresponding eigenvectors v1, v2, . . ., vn form a basis for V. Applying T to v1 (represented as (1, 0, . . ., 0)) gives λ1v1, which is written (λ1, 0, . . ., 0), so the first column is simply that: (λ1, 0, . . ., 0). That's why the matrix of T in this basis is a diagonal matrix with the eigenvalues on the diagonal.
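In coordinates, this is all a diagonal matrix does: scale each coordinate by the corresponding eigenvalue. A small sketch with some made-up eigenvalues (2, 3, 5 are arbitrary choices for illustration):

```python
import numpy as np

# Hypothetical eigenvalues of a transformation whose eigenvectors form a basis.
eigenvalues = [2.0, 3.0, 5.0]
D = np.diag(eigenvalues)  # the matrix of T in the eigenvector basis

# In that basis, the first eigenvector is represented as (1, 0, 0);
# applying D scales it by the first eigenvalue.
v1_coords = np.array([1.0, 0.0, 0.0])
print(D @ v1_coords)  # [2. 0. 0.] -- i.e. lambda_1 times v1
```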
The "matrix with eigenvectors as columns" you are talking about is, I think, the "change of basis" matrix. If you have a linear transformation written as a matrix in a given basis and want to change to another basis, then you multiply by a "change of basis" matrix, which works just the same as applying a linear transformation. Its columns are, again, just the results of applying it to the basis vectors. In particular, the "change of basis" matrix from the basis of eigenvectors to your original basis takes (1, 0, . . ., 0) to the first eigenvector (written in the original basis), and so its first column is exactly that eigenvector.
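Putting the two pieces together gives the usual diagonalization A = P D P^(-1), where P is the change-of-basis matrix whose columns are the eigenvectors and D is the diagonal matrix of eigenvalues. A sketch with an arbitrary example matrix (A below is just a convenient symmetric matrix, chosen so its eigenvectors form a basis):

```python
import numpy as np

# A hypothetical matrix with a full set of linearly independent eigenvectors.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are
# the corresponding eigenvectors -- exactly the change-of-basis matrix.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Change to the eigenvector basis, apply the diagonal matrix, change back:
# this recovers A, i.e. A = P D P^(-1).
A_reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(A, A_reconstructed))  # True
```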