
Finding Eigenvectors for Coordinate transformation

  1. Jun 21, 2005 #1



    I'm wondering if anyone here might have a solution to a problem I'm having. This is a quantum mechanics problem I'm working on.

    I calculate a 4 by 4 complex Hermitian matrix (H = Hamiltonian) in a basis where it is not diagonal. I diagonalize it numerically (using EISPACK) and get eigenvalues and eigenvectors, V. With the eigenvector matrix, V, I have verified that the Hamiltonian in the original basis is transformed into a diagonal matrix in the eigenbasis of H. That is,

    [tex]D = V^\dagger H V[/tex]

    Now I want to move some other Operators (matrices) from my original basis into the eigenbasis of H. For example, I want to transform matrix X into the eigenbasis of H.

    [tex]X' = V^\dagger X V[/tex]

    Here's my problem. There is not a unique V to transform H into D. That is, I could use

    [tex]V_o = V e^{iB}[/tex]

    where B is a Hermitian matrix that commutes with D, i.e., [itex]BD - DB = 0[/itex].

    So I still get

    [tex]V_o^\dagger H V_o = e^{-iB}\, V^\dagger H V\, e^{iB} = e^{-iB} D e^{iB} = D[/tex]

    but since [itex]XB - BX \neq 0[/itex] (i.e., X and B don't commute), I get different answers when transforming X into the eigenbasis of H, depending on whether my diagonalization routine gives me V or Vo back for the eigenvectors.
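
    One way to make the ambiguity concrete: if the eigenvalues happen to be distinct (as it turns out they are in my problem; see below), then any B that commutes with D must itself be diagonal, say B = diag(b_1, ..., b_n). The transformed matrix elements then just pick up relative phases,

    [tex](V_o^\dagger X V_o)_{jk} = e^{i(b_k - b_j)}\,(V^\dagger X V)_{jk} ,[/tex]

    so the diagonal elements agree for either choice of eigenvectors, but the off-diagonal elements depend on which V the routine returns.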

    Does anyone know what this problem is called? Does anyone know of someplace I can read more, and possibly find a solution?


  3. Jun 22, 2005 #2



    To preserve the probability structure of quantum mechanics, Wigner's theorem states that the operators which transform state vectors and observables must be either

    * linear and unitary, or
    * antilinear and antiunitary.

    Check that your operators obey Wigner's theorem.
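
    For instance, here is a minimal numerical sanity check that V is unitary (a sketch only; the cplx struct, the 1-based indexing, and NMAX are assumptions modeled on the code posted later in this thread):

    #include <math.h>

    #define NMAX 5   /* assumed bound: 4 states plus 1-based indexing */

    typedef struct { double r, i; } cplx;

    /* Return max over j,k of |(V^dag V - I)_{jk}|.
       A result near machine precision means V is unitary. */
    double unitarity_error(int nstate, cplx V[NMAX][NMAX])
    {
        double worst = 0.0;
        int j, k, s;
        for (j = 1; j <= nstate; j++) {
            for (k = 1; k <= nstate; k++) {
                double re = 0.0, im = 0.0, err;
                for (s = 1; s <= nstate; s++) {
                    /* (V^dag V)_{jk} = sum_s conj(V_{sj}) V_{sk} */
                    re += V[s][j].r * V[s][k].r + V[s][j].i * V[s][k].i;
                    im += V[s][j].r * V[s][k].i - V[s][j].i * V[s][k].r;
                }
                if (j == k) re -= 1.0;   /* subtract the identity */
                err = sqrt(re * re + im * im);
                if (err > worst) worst = err;
            }
        }
        return worst;
    }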

  4. Jun 22, 2005 #3
    If the eigenvalues of H are distinct, then it has a unique (up to an arbitrary phase for each eigenvector) set of orthonormal eigenvectors that form a basis for the 4-d space***. In this case, the only difference between V and Vo is that each column in V generally has a different phase from the corresponding column in Vo. Therefore [itex]V_o^\dagger X V_o = V^\dagger X V[/itex].

    On the other hand, if the eigenvalues E_k of H are not distinct, then the orthonormal eigenbasis is not unique. For example, if the eigenvalue E_j has multiplicity m, then the eigenvectors with eigenvalue E_j form an m-dimensional subspace. If you pick any m linearly independent vectors from this subspace, you can construct from them an orthonormal basis for the subspace using the Gram-Schmidt method. Any vector in this subspace is automatically orthogonal to all eigenvectors corresponding to eigenvalues other than E_j. However, the particular orthonormal basis you construct for the subspace depends on your initial choice of m independent vectors.

    So in this case of non-distinct eigenvalues, the difference between V and Vo will generally correspond to a different choice of orthonormal bases for each subspace corresponding to a repeated eigenvalue. Some columns of Vo will be linear combinations of columns in V, and that will generally lead to a different transformation of X. So you're right: the result of transforming the matrix of X into an eigenbasis of H will generally depend on which eigenbasis you choose.

    Even if you decided to stick with one set of basis eigenvectors, the representation of X in that basis would depend on how you numbered them. Of course, the order of the diagonal elements of D would also depend on this numbering, which is a different situation from the one we were dealing with above.
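
    To make the degenerate case concrete, here is a simple illustration: if u_1 and u_2 are orthonormal eigenvectors sharing the eigenvalue E_j, then for any angle [itex]\theta[/itex],

    [tex]u_1' = \cos\theta\, u_1 + \sin\theta\, u_2 , \qquad u_2' = -\sin\theta\, u_1 + \cos\theta\, u_2[/tex]

    is an equally valid orthonormal pair for that subspace (and more generally you can mix the two with any 2x2 unitary matrix).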

    ***A similar statement is true for any n×n matrix A with n distinct eigenvalues: it has a unique set of normalized eigenvectors (not necessarily orthogonal, and each still defined only up to a phase). For example, let u_1, u_2, ..., u_n be eigenvectors of A corresponding to the eigenvalues E_1, E_2, ..., E_n. Since the E_j's are distinct, the u_j's are linearly independent and form a basis for the n-dimensional space. Now suppose a vector w is an eigenvector of A corresponding to eigenvalue E_k. We can write w as a linear combination of the basis vectors:

    w = c_1 u_1 + c_2 u_2 + ... + c_n u_n

    Applying A to both sides of this equation, we get

    E_k w = c_1 E_1 u_1 + c_2 E_2 u_2 + ... + c_n E_n u_n

    Multiplying the expansion of w by the scalar E_k, we also have

    E_k w = c_1 E_k u_1 + c_2 E_k u_2 + ... + c_n E_k u_n .

    Subtracting one from the other, we get

    c_1 (E_k - E_1) u_1 + c_2 (E_k - E_2) u_2 + ...+ c_(k-1) (E_k - E_(k-1)) u_(k-1) + c_(k+1) (E_k - E_(k+1)) u_(k+1) + ... + c_n (E_k - E_n) u_n = 0 .

    Since the u_j's are linearly independent, the coefficients must all be zero. But the eigenvalues are distinct, so this means that all the c_j's are zero except possibly c_k. In other words, w = c_k u_k, and is thus proportional to the original eigenvector u_k.
    Last edited: Jun 22, 2005
  5. Jun 22, 2005 #4



    In my case, I do not have degenerate eigenvalues. So I guess that means the eigenvalues of H are distinct?

    My problem is that when H is very close to diagonal before I numerically diagonalize it, I expect the numerical diagonalization to give me a V that is close to the identity matrix. More importantly, I expect the V calculated when H is close to diagonal to leave X essentially unchanged. That is,

    [tex]V^\dagger X V \rightarrow X[/tex]

    What I find when H is close to diagonal is that the eigenvectors transform X into X plus linear combinations of the commutators of X and B.

    I suppose I could adjust the phase of each column in V so I have ones on the diagonal. That should still give me a V that diagonalizes H.
  6. Jun 22, 2005 #5
    Oops. I goofed when I said that [itex]V_o^\dagger X V_o = V^\dagger X V[/itex] for distinct eigenvalues. That equation isn't true.

    You are right about the commutators of X and B appearing in your expression for [itex]V_o^\dagger X V_o[/itex]. Here's why. Suppose, like you said, that V is almost the identity matrix. Then we can write it as

    [tex]V = 1 + i \epsilon U[/tex]

    where epsilon is a small parameter. V is unitary, which gives a condition on U:

    [tex]VV^\dagger = (1+i\epsilon U)(1-i\epsilon U^\dagger) = 1 + i \epsilon (U - U^\dagger) + O(\epsilon^2)[/tex] .

    For V to be unitary, we must have that [itex]U = U^\dagger[/itex] to first order in epsilon.


    Expanding the similarity transformation of X to first order in epsilon then gives

    [tex] V^\dagger X V = X + i\epsilon [X, U] + O(\epsilon^2)[/tex]

    Now if [itex]V_o = V e^{i B}[/itex] where [itex][B, D] = 0[/itex], we get

    [tex] V_o^\dagger X V_o = e^{-iB}Xe^{iB} + i\epsilon e^{-iB}[X, U]e^{iB} + O(\epsilon^2)[/tex]

    But [tex]e^{-iB}Xe^{iB} = X - i[B, X] - \frac{1}{2}[B, [B, X]] + ...[/tex]

    And, to zeroth order in epsilon, that expansion is what you get for [itex]V_o^\dagger X V_o[/itex]. Or I guess I should say, I think that is what you get. Does it agree with your result?
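
    If you want to check the first-order formula numerically, here is a small self-contained test (my own illustration; the 2x2 matrices X and U are made up for the example) verifying that [itex]V^\dagger X V = X + i\epsilon[X,U] + O(\epsilon^2)[/itex]:

    #include <complex.h>
    #include <stdio.h>

    /* Multiply 2x2 complex matrices: C = A B. */
    static void mult(double complex A[2][2], double complex B[2][2],
                     double complex C[2][2])
    {
        int j, k;
        for (j = 0; j < 2; j++)
            for (k = 0; k < 2; k++)
                C[j][k] = A[j][0]*B[0][k] + A[j][1]*B[1][k];
    }

    int main(void)
    {
        double eps = 1e-4;
        /* arbitrary Hermitian test matrices */
        double complex U[2][2] = {{0.3, 0.2 - 0.5*I}, {0.2 + 0.5*I, -0.1}};
        double complex X[2][2] = {{1.0, 2.0*I}, {-2.0*I, 4.0}};
        double complex V[2][2], Vd[2][2], T[2][2], VXV[2][2];
        double complex XU[2][2], UX[2][2];
        int j, k;

        for (j = 0; j < 2; j++)
            for (k = 0; k < 2; k++) {
                V[j][k]  = (j == k) + I*eps*U[j][k];        /* V = 1 + i eps U */
                Vd[j][k] = (j == k) - I*eps*conj(U[k][j]);  /* V^dag           */
            }

        mult(X, V, T);
        mult(Vd, T, VXV);      /* VXV = V^dag X V */
        mult(X, U, XU);
        mult(U, X, UX);

        /* the difference from X + i eps [X,U] should be O(eps^2), ~1e-8 here */
        for (j = 0; j < 2; j++)
            for (k = 0; k < 2; k++) {
                double complex approx = X[j][k] + I*eps*(XU[j][k] - UX[j][k]);
                printf("(%d,%d): |difference| = %g\n", j, k, cabs(VXV[j][k] - approx));
            }
        return 0;
    }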
  7. Jun 22, 2005 #6



    Yes, you understand my problem. Your previous post, however, made me realize that the solution is quite simple.

    I just adjust the phase of each eigenvector column in V so that its diagonal element is real. This doesn't affect the diagonalization of H (since it's an unobservable phase), and it puts my numerical V in a form that will match my perturbation expansion for it, namely,

    [tex]V = 1 + V^{(1)} + V^{(2)} + \cdots[/tex]
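
    In other words, each column k of V gets multiplied by the unit phase conjugate to its diagonal element,

    [tex]V'_{jk} = \frac{V_{kk}^*}{|V_{kk}|}\, V_{jk} ,[/tex]

    which makes [itex]V'_{kk} = |V_{kk}|[/itex] real and positive while each column remains a normalized eigenvector of H.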

    Not only does it give me the correct Ix matrix in the zeroth-order limit, but the code also seems to converge to the right signal with much larger dt steps.

    I'm happy. :rofl:

    I'm a little surprised I couldn't find any discussion like this elsewhere on the web. Hopefully, this thread will be helpful to someone else in the future.

    Thanks for your help.

    Phil G <- that's me - pjg
  8. Jun 22, 2005 #7
    That's great! Good work! So you are writing your own eigensolver?
  9. Jun 22, 2005 #8



    No, I just translated the EISPACK routines used in CH from Fortran into C, and use that. It works well, and I don't have to worry about someone struggling to get the code compiled and linked to external libraries.

    Here's the snippet of code I wrote to solve the problem...

    for(r=1;r<=nstate;r++) {
        /* phase of the diagonal element of eigenvector r */
        Ax = V[r][r].r;
        Ay = V[r][r].i;
        norm = sqrt(Ax*Ax + Ay*Ay);
        /* element = conj(V[r][r]) / |V[r][r]|, a unit phase */
        element.r = Ax/norm;
        element.i = -Ay/norm;
        for(s=1;s<=nstate;s++) {
            /* bug: the first line overwrites V[r][s].r, which the
               second line then uses; see the correction below */
            V[r][s].r = element.r*V[r][s].r - element.i*V[r][s].i;
            V[r][s].i = element.i*V[r][s].r + element.r*V[r][s].i;
        }
    }
    Last edited: Jun 23, 2005
  10. Jun 22, 2005 #9
    Okay, cool. I'm not familiar with CH. What is it?
  11. Jun 22, 2005 #10


    CH is the driver subroutine in EISPACK that computes the eigenvalues and eigenvectors of a complex Hermitian matrix.

  12. Jun 22, 2005 #11
    Thank you! :smile:
  13. Jun 27, 2005 #12



    There was a mistake in my previous post. The fix should have been:

    for(r=1;r<=nstate;r++) {
        V_r = V[r];               /* row holding eigenvector r */
        Ax = V_r[r].r;
        Ay = V_r[r].i;
        norm = sqrt(Ax*Ax + Ay*Ay);
        /* (Ax, Ay) is now the unit phase of the diagonal element */
        Ax = Ax/norm;
        Ay = Ay/norm;
        for(s=1;s<=nstate;s++) {
            /* multiply by the conjugate phase, storing into a
               temporary so nothing is overwritten before use */
            element.r = V_r[s].r*Ax + V_r[s].i*Ay;
            element.i = V_r[s].i*Ax - V_r[s].r*Ay;
            V_r[s].r = element.r;
            V_r[s].i = element.i;
        }
    }
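
    For anyone reading later, here is a self-contained sketch of the same phase fix using C99 complex arithmetic (an illustration only, not the EISPACK-translated code above; NSTATE_MAX and the function name are made up for the example):

    #include <complex.h>

    #define NSTATE_MAX 4   /* assumed: this thread's problem is 4x4 */

    /* Multiply eigenvector r (stored, 1-based, in row r of V) by the
       unit phase conj(V[r][r])/|V[r][r]|, so that every diagonal
       element of V becomes real and positive. This leaves V^dag H V = D
       unchanged, since each eigenvector changes only by a phase. */
    void fix_eigenvector_phases(int nstate, double complex V[][NSTATE_MAX + 1])
    {
        int r, s;
        for (r = 1; r <= nstate; r++) {
            double complex phase = conj(V[r][r]) / cabs(V[r][r]);
            for (s = 1; s <= nstate; s++)
                V[r][s] *= phase;
        }
    }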