Extract from my book on linear algebra, posted here:
The “spectral theorem” (symmetric matrices are diagonalizable)
Theorem: If A is a symmetric n by n real matrix, then R^n has a basis of eigenvectors for A.
Proof: The real-valued function f(x) = Ax.x attains a maximum on the unit sphere in R^n, since the sphere is compact. At a maximum point x, the component of the gradient of f along the tangent space to the sphere is zero, i.e. the gradient is perpendicular to the tangent space at that point. But the tangent space at x is the subspace of vectors perpendicular to x, and the gradient of f at x is the vector 2Ax (here the symmetry of A is used: the gradient of Ax.x is (A + A^t)x = 2Ax). Hence Ax is also perpendicular to the tangent space at x, i.e. Ax is a scalar multiple of x (possibly the zero multiple), i.e. x is an eigenvector for A. That gives one eigenvector for A.
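(A numerical aside, not from the text: the following Python sketch imitates this step, maximizing f(x) = Ax.x on the unit sphere by gradient ascent with projection back onto the sphere, then checking that the maximizer is an eigenvector. The matrix A, the step size, and the iteration count are my own illustrative choices.)

import numpy as np

A = np.array([[2., 1., 0.],      # any symmetric matrix will do
              [1., 3., 1.],
              [0., 1., 4.]])

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
x /= np.linalg.norm(x)           # start somewhere on the unit sphere
for _ in range(10000):
    x += 0.05 * (2 * A @ x)      # step along the gradient of f, which is 2Ax
    x /= np.linalg.norm(x)       # project back onto the sphere

c = x @ A @ x                    # the multiplier: at the max, Ax = cx with c = f(x)
print(np.allclose(A @ x, c * x)) # True: x is (numerically) an eigenvector of A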
Now restrict A to the tangent space (through the origin) to the sphere at x. I.e. let v be a tangent vector, so that v.x = 0. Then Av.x = v.Ax = v.(cx) = c(v.x) = 0, where c is the eigenvalue of x, and hence A preserves this tangent space. On this subspace A still has the symmetry property Av.w = v.Aw, so the restriction of A has an eigenvector by the same argument. Since we are in finite dimensions, repeating at most n times gives a basis of eigenvectors for A. (Note that although the subspace has no natural representation as R^(n-1), the argument above for producing an eigenvector was coordinate-free, and depended only on the property Av.w = v.Aw, which is still true on the subspace.) QED.
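(Continuing the numerical sketch under the same illustrative choices: the induction step can be imitated by picking an orthonormal basis Q of the tangent space {v : v.x = 0}, restricting A to the (n-1) by (n-1) symmetric matrix Q^t A Q, and recursing. The names top_eigvec and eigenbasis are my own; top_eigvec just packages the iteration from the previous snippet.)

import numpy as np

def top_eigvec(A, iters=10000, step=0.05, seed=1):
    # maximize x.Ax on the unit sphere by projected gradient ascent
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x += step * (2 * A @ x)
        x /= np.linalg.norm(x)
    return x

def eigenbasis(A):
    # returns orthonormal eigenvectors of the symmetric matrix A as columns
    n = A.shape[0]
    if n == 1:
        return np.eye(1)
    x = top_eigvec(A)
    # orthonormal basis of the tangent space {v : v.x = 0}: the last n-1
    # columns of the Q factor of [x | I], whose first column spans x
    Q = np.linalg.qr(np.column_stack([x, np.eye(n)]))[0][:, 1:]
    B = Q.T @ A @ Q                  # the restriction of A, still symmetric
    V = Q @ eigenbasis(B)            # its eigenvectors, mapped back to R^n
    return np.column_stack([x, V])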
Corollary (of proof): There is actually a basis of mutually perpendicular eigenvectors for a symmetric n by n matrix, since each new eigenvector is chosen in the subspace perpendicular to all the previous ones.
Since a matrix whose columns are orthonormal vectors has inverse equal to its own transpose, this also diagonalizes the quadratic form: scaling the perpendicular eigenvectors to unit length and taking them as the columns of P, the operation M goes to (P^t)MP produces a diagonal matrix.
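(Checking this with the sketch above, eigenbasis being the function from the previous snippet: the columns of P are orthonormal, P^t equals P^(-1), and (P^t)AP comes out diagonal with the eigenvalues of A on the diagonal.)

import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
P = eigenbasis(A)                  # orthonormal eigenvectors as columns

print(np.allclose(P.T @ P, np.eye(3)))      # True: so P^t is the inverse of P
D = P.T @ A @ P
print(np.allclose(D, np.diag(np.diag(D))))  # True: the conjugated matrix is diagonal
print(np.round(np.diag(D), 3))              # eigenvalues: 3 + sqrt(3), 3, 3 - sqrt(3)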