I like "I like Serena"'s answer, and I would further suggest seeing what A does to
\begin{bmatrix}1 \\ 0 \end{bmatrix} and \begin{bmatrix}0 \\ 1\end{bmatrix}
the basis vectors, and imagining that occurring repeatedly.
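For this particular A, that is a quick computation:
A\begin{bmatrix}1 \\ 0\end{bmatrix}= \begin{bmatrix}1 \\ 1\end{bmatrix} and A\begin{bmatrix}0 \\ 1\end{bmatrix}= \begin{bmatrix}-1 \\ 1\end{bmatrix},
so each basis vector is rotated 45 degrees counterclockwise and stretched by \sqrt{2}. Applying A n times therefore rotates by 45n degrees and stretches by (\sqrt{2})^n.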
However, ftym2011 titled this thread "The question about diagonalization", which makes me think a more general method is intended.
If the matrix, A, is "diagonalizable", then there exist an invertible matrix, P, and a diagonal matrix, D, such that A= PDP^{-1}. Then A^2= (PDP^{-1})(PDP^{-1})= PD^2P^{-1} because the "P^{-1}" and "P" in the middle cancel. Then A^3= A(A^2)= (PDP^{-1})(PD^2P^{-1})= PD^3P^{-1}. In general, A^n= PD^nP^{-1}, and it is easy to find the nth power of a diagonal matrix: it is the diagonal matrix whose main diagonal holds the nth powers of the original diagonal entries.
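For example (a matrix of my own choosing, not from the thread), take
B= \begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix}.
One can check that B= PDP^{-1} with
P= \begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix} and D= \begin{bmatrix}3 & 0 \\ 0 & 1\end{bmatrix},
so that
B^n= PD^nP^{-1}= \frac{1}{2}\begin{bmatrix}3^n+ 1 & 3^n- 1 \\ 3^n- 1 & 3^n+ 1\end{bmatrix}.
(Setting n= 1 recovers B, a quick sanity check.)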
An n by n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. Specifically, D is the diagonal matrix with the eigenvalues of A on its diagonal and P is the matrix whose columns are the corresponding eigenvectors, in the same order.
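Not part of the thread, but here is a minimal numpy sketch of exactly that recipe, applied to the matrix in question (np.linalg.eig returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors):

import numpy as np

# The matrix from the thread; its eigenvalues turn out to be 1 + i and 1 - i.
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

# A = P D P^{-1}, with D = diag(w) and the columns of P the eigenvectors.
w, P = np.linalg.eig(A)

n = 5
# A^n = P D^n P^{-1}; D^n just raises each diagonal entry to the nth power.
A_n = P @ np.diag(w**n) @ np.linalg.inv(P)

print(np.real_if_close(A_n))           # the imaginary parts cancel out
print(np.linalg.matrix_power(A, n))    # same answer by direct multiplication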
So let's put the question back to ftym2011: Can you find the eigenvalues and corresponding eigenvectors of
\begin{bmatrix}1 & -1 \\ 1 & 1\end{bmatrix}?
However, having said that, I note that the characteristic equation of this matrix is (1- \lambda)^2+ 1= 0, so its eigenvalues, 1\pm i, and the corresponding eigenvectors are complex; so, again, I think I like Serena's suggestion is best.
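For completeness (worked out here, not needed for I like Serena's approach): the eigenvalues are 1\pm i= \sqrt{2}e^{\pm i\pi/4}, with eigenvectors \begin{bmatrix}1 \\ -i\end{bmatrix} and \begin{bmatrix}1 \\ i\end{bmatrix}, and A^n= PD^nP^{-1} simplifies, by de Moivre's formula, to
A^n= (\sqrt{2})^n\begin{bmatrix}\cos(n\pi/4) & -\sin(n\pi/4) \\ \sin(n\pi/4) & \cos(n\pi/4)\end{bmatrix},
which is precisely the "rotate by 45 degrees and stretch by \sqrt{2}, n times" picture from the start.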