OK, so suppose you have a matrix A which you want to diagonalize. You want to build the transformation matrix P such that A = P D P^{-1} (with the eigenvectors as the columns of P, the inverse goes on the right; equivalently, D = P^{-1} A P). Here D is a diagonal matrix, and the values on the diagonal are the eigenvalues of A. The transformation matrix P contains as its columns exactly the eigenvectors of A. So if A has eigenvalues \lambda_1, \lambda_2, \cdots, \lambda_n (not necessarily distinct) with corresponding eigenvectors \vec v_1, \vec v_2, \cdots, \vec v_n, then
P = \begin{pmatrix} (v_1)_1 & (v_2)_1 & \cdots & (v_n)_1 \\ (v_1)_2 & (v_2)_2 & \cdots & (v_n)_2 \\ \vdots & \vdots & \ddots & \vdots \\ (v_1)_n & (v_2)_n & \cdots & (v_n)_n \end{pmatrix}; \qquad
D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}
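To make this concrete, here is a small numerical sketch (my own example in NumPy, not from the original thread) that builds P and D exactly this way and checks A = P D P^{-1}:

```python
import numpy as np

# My own example: a 3x3 lower-triangular matrix, so the eigenvalues
# (2, 3, 5) can be read off the diagonal and are all distinct.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose COLUMNS are
# the corresponding eigenvectors -- exactly the P described above.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Check A = P D P^{-1}, up to floating-point error.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```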
agree? And A is diagonalizable if you can write it this way, that is: if P is invertible. This is what you already said with "and if ill get a line of zeros then its not diagonalasabel": if you get a line of zeros, then the rank of the matrix is smaller than n (a line of zeros means the row space has dimension < n). Equivalently, the column space has dimension < n: the columns are linearly dependent. This means exactly that the eigenvectors v_1, \cdots, v_n are linearly dependent. Now of course, eigenvectors belonging to distinct eigenvalues are always linearly independent, so this can only happen if some eigenvalue occurs more than once. So if all eigenvalues are distinct, A will certainly be diagonalizable. If an eigenvalue does occur (say) twice, you must calculate the corresponding eigenvectors and check whether they are linearly independent.
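In case a concrete pair of examples helps (again my own sketch, not from the thread): the Jordan block \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} has eigenvalue 1 twice but only one independent eigenvector, so it is not diagonalizable, while the 2x2 identity also has eigenvalue 1 twice and yet is trivially diagonalizable:

```python
import numpy as np

# Repeated eigenvalue, NOT diagonalizable: the eigenvector matrix
# returned for the Jordan block has (numerically) dependent columns.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
_, P = np.linalg.eig(J)
print(np.linalg.matrix_rank(P))  # 1 (up to floating-point tolerance)

# Repeated eigenvalue, still diagonalizable: the identity has two
# independent eigenvectors, so P has full rank and is invertible.
_, P = np.linalg.eig(np.eye(2))
print(np.linalg.matrix_rank(P))  # 2
```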
I hope you see that the two problems are equivalent. What you want to do when you say you explicitly want to write down the transformation matrix is calculate all the eigenvectors and put them as columns in this matrix. What HallsOfIvy told you is a sort of shortcut: if all eigenvalues are distinct, then you already know you are not going to get a row of zeros in P when you calculate the eigenvectors and put them in as columns. But if two eigenvalues coincide (which happens when \alpha = 1, 2), then it can happen that you get two linearly dependent eigenvectors, aka a transformation matrix of non-maximal rank, aka a non-invertible transformation matrix, aka a transformation matrix which will give a row of zeros when row-reduced; so in those cases you should explicitly calculate those eigenvectors, as in the sketch below. Of course, if you also calculate the eigenvectors for the other two eigenvalues (1 and 2), then you have all the eigenvectors and you can still explicitly write down the transformation matrix, although it is not necessary.
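If it helps, here is a minimal sketch of the whole procedure in NumPy, including the shortcut (the helper name try_diagonalize and the rounding tolerance are my own choices, nothing standard):

```python
import numpy as np

def try_diagonalize(A):
    """Return (P, D) with A = P @ D @ inv(P), or None if A is not
    diagonalizable. Hypothetical helper, for illustration only."""
    n = A.shape[0]
    eigvals, P = np.linalg.eig(A)
    # The shortcut: n distinct eigenvalues guarantee the eigenvectors
    # are independent, so P is invertible and no further check is needed.
    if len(np.unique(np.round(eigvals, 10))) == n:
        return P, np.diag(eigvals)
    # Repeated eigenvalues: check the eigenvectors explicitly, i.e.
    # check that P has full rank (no row of zeros after row reduction).
    if np.linalg.matrix_rank(P) == n:
        return P, np.diag(eigvals)
    return None

print(try_diagonalize(np.array([[1.0, 1.0], [0.0, 1.0]])))  # None
print(try_diagonalize(np.eye(2)) is not None)               # True
```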