Proving Hermitian if it has real eigenvalues

  1. Nov 5, 2008 #1
    If you had an operator A-hat whose eigenvectors form a complete basis for the Hilbert space and which has only real eigenvalues, how would you prove that it was Hermitian?
  3. Nov 5, 2008 #2


    Staff Emeritus
    Science Advisor
    Gold Member

    You wouldn't. That's not even true when restricted to 2x2 matrices whose entries are all real!

    (And please don't post the same thing multiple times. We don't tolerate it on this forum)
  4. Nov 5, 2008 #3
    Could it be you confused the problem with some claim about the spectrum? If a 2x2 matrix is diagonalizable with real eigenvalues, isn't it Hermitian then? And isn't the eigenvectors forming a complete basis the same thing as diagonalizability?
  5. Nov 5, 2008 #4


    Staff Emeritus
    Science Advisor
    Gold Member

    A diagonal matrix with real eigenvalues is Hermitian. But not necessarily if the matrix is merely diagonalizable with real eigenvalues. Why should [itex](PDP^{-1})^* = PDP^{-1}[/itex]?
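    For instance, here is a quick numerical sketch of my own (numpy; the particular [itex]P[/itex] below is just an arbitrary invertible, non-unitary matrix):

    Code (Python):
    import numpy as np

    D = np.diag([1.0, 3.0])              # diagonal with real eigenvalues -> Hermitian
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])           # invertible, but not unitary
    A = P @ D @ np.linalg.inv(P)         # similar to D, so same real eigenvalues

    print(np.allclose(D, D.conj().T))    # True:  D is Hermitian
    print(np.allclose(A, A.conj().T))    # False: A = P D P^{-1} is not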
  6. Nov 5, 2008 #5
    I see... so the symmetry of a matrix isn't always preserved under coordinate transformations. Would it have fixed the problem if maddogtheman had mentioned that his basis is orthogonal?
  7. Nov 5, 2008 #6
    I assumed the basis to be orthogonal but I think it makes for a rigorous proof.
  8. Nov 5, 2008 #7
    doesn't make*
  9. Nov 5, 2008 #8
    The problem is that [tex]T\mapsto T^{\dagger}[/tex] does not commute with all linear coordinate transformations, so strictly speaking we should never talk about a linear mapping being Hermitian on its own. A simple fact, which I had never thought about before. Instead, we should speak about a linear mapping being Hermitian with respect to some basis (or with respect to a certain collection of bases).


    The exercise can be done by computing

    [tex](\psi_i | (T-T^{\dagger})\psi_j)[/tex]

    with orthogonal eigenvectors [tex]\psi_k[/tex], which satisfy [tex]T\psi_k = \lambda_k \psi_k[/tex].
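    Spelling that computation out (this is my own filling-in, assuming the eigenvectors are actually orthonormal, [tex](\psi_i|\psi_j)=\delta_{ij}[/tex], and the eigenvalues real):

    [tex](\psi_i|(T-T^{\dagger})\psi_j) = (\psi_i|T\psi_j) - (T\psi_i|\psi_j) = \lambda_j(\psi_i|\psi_j) - \lambda_i^*(\psi_i|\psi_j) = (\lambda_j-\lambda_i)\delta_{ij} = 0.[/tex]

    Since every matrix element of [tex]T-T^{\dagger}[/tex] in the complete basis [tex]\{\psi_k\}[/tex] vanishes, [tex]T = T^{\dagger}[/tex].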

    (edit: I first wrote complicated instructions, and then edited them simpler)
    Last edited: Nov 5, 2008
  10. Nov 6, 2008 #9
    For example, if

    [tex]T = \left(\begin{array}{cc} 1 & 2 \\ 0 & 3 \\ \end{array}\right)[/tex]

    then the vectors

    [tex]u_1 = \left(\begin{array}{c} 1 \\ 0 \\ \end{array}\right),\quad u_2 = \left(\begin{array}{c} 1 \\ 1 \\ \end{array}\right)[/tex]

    are the eigenvectors with eigenvalues [tex]\lambda_1 = 1[/tex] and [tex]\lambda_2 = 3[/tex]. So the matrix is diagonalizable with real eigenvalues, but it is not Hermitian since [tex]T \neq T^{\dagger}[/tex].
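    A quick numerical check of this (my own numpy sketch):

    Code (Python):
    import numpy as np

    T = np.array([[1.0, 2.0],
                  [0.0, 3.0]])

    evals, evecs = np.linalg.eig(T)
    print(evals)                             # real eigenvalues 1 and 3
    print(np.linalg.matrix_rank(evecs))      # 2: eigenvectors span the space (diagonalizable)
    print(np.allclose(T, T.conj().T))        # False: T is not Hermitian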

    Lols. :blushing:

    But if we define a new inner product [tex](u_i|u_j)=\delta_{ij}[/tex], and a new conjugate mapping [tex]T\mapsto T^{\dagger}[/tex] by [tex](\psi|T^{\dagger}\phi)=(T\psi|\phi)[/tex] with the new inner product, then [tex]T[/tex] becomes Hermitian.
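    Concretely (a sketch of my own, representing that new inner product by a Gram matrix: [itex](x|y) = x^{\dagger} G y[/itex] with [itex]G = (PP^{\dagger})^{-1}[/itex], where the columns of [itex]P[/itex] are the eigenvectors [itex]u_1, u_2[/itex]):

    Code (Python):
    import numpy as np

    T = np.array([[1.0, 2.0],
                  [0.0, 3.0]])
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])              # columns are the eigenvectors u_1, u_2

    # Gram matrix of the new inner product, chosen so that (u_i|u_j) = delta_ij:
    # (x|y)_new = x^dagger G y  with  G = (P P^dagger)^{-1}
    G = np.linalg.inv(P @ P.conj().T)

    # T is self-adjoint with respect to (.|.)_new exactly when G T = T^dagger G
    print(np.allclose(G @ T, T.conj().T @ G))   # True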
  11. Nov 6, 2008 #10


    Homework Helper

    This is because the adjoint of an operator is only defined with respect to an inner product. Specifically, it is the operator [itex]A^\dagger[/itex] such that [itex](u,Av) = (A^\dagger u,v)[/itex] for all vectors u and v. In terms of matrices, as long as we write the coefficients in an orthonormal basis, this just amounts to taking the conjugate transpose. And this will also be true in any other orthonormal basis, where the other orthonormal bases are precisely the ones which can be reached by a unitary transformation (with which the adjoint operation does commute).
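    To see the last point concretely, here is a numpy sketch of my own ([itex]Q[/itex] is an arbitrary unitary, [itex]P[/itex] an arbitrary non-unitary invertible matrix):

    Code (Python):
    import numpy as np

    A = np.array([[1.0, 2.0 + 1.0j],
                  [0.5, 3.0]])                        # an arbitrary operator

    theta = 0.3
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # unitary (a rotation)
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                        # invertible, but not unitary

    # the adjoint (conjugate transpose) commutes with the unitary change of basis ...
    print(np.allclose((Q @ A @ Q.conj().T).conj().T,
                      Q @ A.conj().T @ Q.conj().T))           # True
    # ... but not, in general, with a non-unitary one
    print(np.allclose((P @ A @ np.linalg.inv(P)).conj().T,
                      P @ A.conj().T @ np.linalg.inv(P)))     # False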

    As for the OP, it depends on whether the eigenvectors are orthonormal. If so, then the answer should be easy based on what I just said.