I guess this is a bit late, but such big problems are typically solved using subspace methods like Davidson iteration[*]. For sizes far beyond 100k rows it is normally impractical to compute all eigenvectors, and it may even be impractical to compute or store the matrix itself. For this reason the standard black-box methods found, say, in LAPACK (e.g., based on divide-and-conquer diagonalization or on Multiple Relatively Robust Representations, MRRR) are often not the method of choice.
In such large cases, often only a few eigenvectors are required, and most practical algorithms are built around the operation r = H * c, i.e., the contraction of the operator H with a trial vector "c" (or a set of trial vectors), in which the matrix representation of H is never needed explicitly. The intermediate results H * c are then used to iteratively improve the trial solution, usually by constructing a new trial vector from the residual and adding it to an iterative subspace (a "Krylov subspace"), in which the problem is solved exactly. To use such methods efficiently, however, additional information about the operator is usually required (e.g., some form of useful preconditioner), so they are not black-box methods.
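To make the structure concrete, here is a minimal sketch of a Davidson iteration for the lowest eigenpair of a symmetric operator. Everything in it is illustrative: `apply_H` (the matrix-free contraction H * c) and `diag_H` (the diagonal of H, used as a crude preconditioner) are hypothetical stand-ins for whatever your application provides, and a production code would add restarts, a block version for several eigenpairs, and a better preconditioner.

```python
import numpy as np

def davidson_lowest(apply_H, diag_H, n, tol=1e-8, max_iter=100):
    """Sketch of Davidson iteration for the lowest eigenpair of symmetric H.

    apply_H(c) must return H @ c without ever forming H; diag_H is the
    diagonal of H, used as the preconditioner. Both are assumptions about
    what the application can supply.
    """
    # Start from a single unit trial vector.
    V = np.zeros((n, 1))
    V[0, 0] = 1.0
    HV = apply_H(V[:, 0]).reshape(n, 1)
    for _ in range(max_iter):
        # Rayleigh-Ritz: solve the projected problem exactly in the subspace.
        S = V.T @ HV
        theta, y = np.linalg.eigh(S)
        theta, y = theta[0], y[:, 0]          # lowest Ritz pair
        x = V @ y                             # Ritz vector in the full space
        r = HV @ y - theta * x                # residual r = H x - theta x
        if np.linalg.norm(r) < tol:
            return theta, x
        # Davidson correction: precondition the residual with diag(H) - theta.
        denom = diag_H - theta
        denom[np.abs(denom) < 1e-12] = 1e-12  # guard against division by zero
        t = r / denom
        # Orthogonalize against the current subspace, normalize, and expand.
        t -= V @ (V.T @ t)
        t /= np.linalg.norm(t)
        V = np.hstack([V, t[:, None]])
        HV = np.hstack([HV, apply_H(t).reshape(n, 1)])
    return theta, x

# Toy example: a diagonally dominant tridiagonal operator, applied matrix-free.
n = 1000
diag = np.arange(1, n + 1, dtype=float)

def apply_H(c):
    # H = diag(1..n) plus weak off-diagonal coupling; never stored as a matrix.
    out = diag * c
    out[:-1] += 0.01 * c[1:]
    out[1:] += 0.01 * c[:-1]
    return out

theta, x = davidson_lowest(apply_H, diag.copy(), n)
print(theta)  # close to the smallest eigenvalue, ~1.0
```

Note how the loop only ever touches H through `apply_H`: that single design choice is what makes these methods usable when the matrix itself cannot be stored.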
[*]: (For some reason beyond my comprehension, many people also use the Lanczos algorithm for finding eigenvectors. If you think of doing this: don't. Use Jacobi-Davidson instead; it is far superior in every way.)