Finding Eigenvectors for Coordinate Transformation

In summary: if H has distinct eigenvalues, each eigenvector is unique up to an overall phase, and fixing those phases (for example, by making the diagonal elements of V real and positive) removes the ambiguity in transforming other operators into the eigenbasis of H.
  • #1
pjg
I'm wondering if anyone here might have a solution to a problem I'm having. This is a Quantum Mechanics problem I'm doing.

I calculate a 4 by 4 complex Hermitian matrix (H = Hamiltonian) in a basis where it is not diagonal. I diagonalize it numerically (using eispack) and get eigenvalues and eigenvectors, V. With the eigenvector matrix, V, I have verified that the Hamiltonian in the original basis is transformed into a diagonal matrix in the eigenbasis of H. That is,

D = Vadj H V

Now I want to move some other Operators (matrices) from my original basis into the eigenbasis of H. For example, I want to transform matrix X into the eigenbasis of H.

X* = Vadj X V

Here's my problem. There is not a unique V to transform H into D. That is, I could use

Vo = V exp(i B)

where B is a matrix that commutes with D. i.e., BD-DB = 0.

So I still get

Voadj H Vo = exp(-iB) Vadj H V exp(iB) = exp(-iB) D exp(iB) = D

but since XB - BX != 0 (i.e., X and B don't commute), then I get different answers when transforming X into the eigenbasis of H depending on whether my diagonalization routine gives me V or Vo back for the eigenvectors.
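To see the ambiguity in the simplest case, suppose B is diagonal, B = diag(b1, b2, b3, b4) with real entries, so it automatically commutes with D. Then

Voadj X Vo = exp(-iB) (Vadj X V) exp(iB)

and element (j,k) of the transformed matrix is exp(i(bk - bj)) times element (j,k) of Vadj X V. So the off-diagonal elements of X in the new basis depend on the arbitrary phases bj.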

Does anyone know what this problem is called? Does anyone know of someplace I can read more, and possibly find a solution?

Thanks,

Philip
 
  • #2
In order to preserve transition probabilities, Wigner's theorem states that the operators which transform the states and observables must be either

* linear and unitary.
* antilinear and antiunitary.

Check that your operators obey Wigner's theorem.

Daniel.
 
  • #3
If the eigenvalues of H are distinct, then it has a unique (up to an arbitrary phase for each eigenvector) set of orthonormal eigenvectors that form a basis for the 4-d space***. In this case, the only difference between V and Vo is that each column in V generally has a different phase from the corresponding column in Vo. Therefore Voadj X Vo = Vadj X V.

On the other hand, if the eigenvalues E_k of H are not distinct, then the orthonormal eigenbasis is not unique. For example, if the eigenvalue E_j has multiplicity m, then the eigenvectors with eigenvalue E_j form an m-dimensional subspace. If you pick any m linearly independent vectors from this subspace, you can construct from them an orthonormal basis for the subspace using the Gram-Schmidt method. Any vector in this subspace is automatically orthogonal to all eigenvectors corresponding to eigenvalues other than E_j. However, the particular orthonormal basis you construct for the subspace depends on your initial choice of m independent vectors.

So in this case of non-distinct eigenvalues, the difference between V and Vo will generally correspond to a different choice of orthonormal basis for each subspace corresponding to a repeated eigenvalue. Some columns of Vo will be linear combinations of columns of V, and that will generally lead to a different transformation of X. So you're right: the result of transforming the matrix of X into an eigenbasis of H will generally depend on which eigenbasis you choose.

Even if you decided to stick with one set of basis eigenvectors, the representation of X in that basis would depend on how you numbered them. Of course, the order of the diagonal elements of D would also depend on this numbering, which is a different situation than what we were dealing with above.
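A minimal example of the degenerate case: take H = E times the 2x2 identity matrix, so every nonzero vector is an eigenvector with eigenvalue E. Then both V = 1 and [tex]V_o = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}[/tex] are perfectly good orthonormal eigenbases, but for [tex]X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}[/tex] you get Vadj X V = X while [tex]V_o^\dagger X V_o = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}[/tex].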

***A similar statement is true for any nxn matrix A with n distinct eigenvalues...it has a unique set of normalized eigenvectors (not necessarily orthogonal). For example, let u_1, u_2, ..., u_n be eigenvectors of A corresponding to the eigenvalues E_1, E_2, ..., E_n. Since the E_j's are distinct, the u_j's are linearly independent and form a basis for the n-dimensional space. Now suppose a vector "w" is an eigenvector of A corresponding to eigenvalue E_k. We can write w as a linear combination of the basis vectors:

w = c_1 u_1 + c_2 u_2 + ... + c_n u_n

Applying A to both sides of this equation, we get

E_k w = c_1 E_1 u_1 + c_2 E_2 u_2 + ... + c_n E_n u_n

we also have

E_k w = c_1 E_k u_1 + c_2 E_k u_2 + ... + c_n E_k u_n .

Subtracting one from the other, we get

c_1 (E_k - E_1) u_1 + c_2 (E_k - E_2) u_2 + ...+ c_(k-1) (E_k - E_(k-1)) u_(k-1) + c_(k+1) (E_k - E_(k+1)) u_(k+1) + ... + c_n (E_k - E_n) u_n = 0 .

Since the u_j's are linearly independent, the coefficients must all be zero. But the eigenvalues are distinct, so this means that all the c_i's are zero except for c_k. In other words, w = c_k u_k, and is thus proportional to the original eigenvector u_k.
 
  • #4
In my case, I do not have degenerate eigenvalues. So, I guess that means my eigenvalues in H are distinct?

My problem is this: if H is very close to diagonal before I numerically diagonalize it, I expect the numerical diagonalization to give me a V that is close to the identity matrix. More importantly, I expect the V calculated when H is close to diagonal not to affect X at all. That is,

Vadj X V -> X

What I find when H is close to diagonal is that the eigenvectors transform X into X plus linear combinations of the commutators of X and B.

I suppose I could adjust the phase of each column in V so the diagonal elements are real (and close to one). That should still give me a V that diagonalizes H.
 
  • #5
Oops. I goofed when I said
me said:
In this case, the only difference... Therefore Voadj X Vo = Vadj X V.
That equation isn't true.

You are right about the commutators of X and B appearing in your expression for Voadj X Vo. Here's why. Suppose, like you said, that V is almost the identity matrix. Then we can write it as

[tex]V = 1 + i \epsilon U[/tex]

where epsilon is a small parameter. V is unitary, which gives a condition on U:

[tex]VV^\dagger = (1+i\epsilon U)(1-i\epsilon U^\dagger) = 1 + i \epsilon (U - U^\dagger) + O(\epsilon^2)[/tex] .

For V to be unitary, we must have that [itex]U = U^\dagger[/itex] to first order in epsilon.

Then

[tex] V^\dagger X V = X + i\epsilon [X, U] + O(\epsilon^2)[/tex]

Now if [itex]V_o = V e^{i B}[/itex] where [itex][B, D] = 0[/itex], we get

[tex] V_o^\dagger X V_o = e^{-iB}Xe^{iB} + i\epsilon e^{-iB}[X, U]e^{iB} + O(\epsilon^2)[/tex]

But [tex]e^{-iB}Xe^{iB} = X - i[B, X] - \frac{1}{2}[B, [B, X]] + ...[/tex]

And that expansion is what you get for Voadj X Vo to zeroth order in epsilon. Or I guess I should say, I think that is what you get. Does it agree with your result?
 
  • #6
Yes, you understand my problem. Your previous post, however, made me realize that the solution is quite simple.

I just adjust the phase of each eigenvector column in V so its diagonal elements are real. This doesn't affect the diagonalization of H (since it's an unobservable phase), and it puts my numerical V in a form that will match my perturbation expansion for it, namely,

V = 1 + V(1) + V(2) + ...

Not only does it give me the correct Ix matrix in the zeroth order limit, but the code seems to converge to the right signal with much larger dt steps.
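In formulas, with the k-th eigenvector stored as the k-th column of V, the adjustment is

V_jk -> V_jk * conj(V_kk)/|V_kk| for every j,

which makes V_kk real and positive while leaving each column normalized and Vadj H V unchanged.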

I'm happy. :rofl:

I'm a little surprised I couldn't find any discussion like this elsewhere on the web. Hopefully, this thread will be helpful to someone else in the future.

Thanks for your help.

Phil G <- that's me - pjg
 
  • #7
pjg said:
Yes, you understand my problem. Your previous post, however, made me realize that the solution is quite simple.

I just adjust the phase of each eigenvector column in V so its diagonal elements are real. This doesn't affect the diagonalization of H (since it's an unobservable phase), and it puts my numerical V in a form that will match my perturbation expansion for it, namely,

V = 1 + V(1) + V(2) + ...

Not only does it give me the correct Ix matrix in the zeroth order limit, but the code seems to converge to the right signal with much larger dt steps.
...

That's great! Good work! So you are writing your own eigensolver?
 
  • #8
No, I just translated the EISPACK routines used in CH from Fortran into C, and use that. It works well, and I don't have to worry about someone struggling to get the code compiled and linked to libraries.

Here's the snippet of code I wrote to solve the problem...

ch(Ham,eig,V,1,nstate,scratch,scr_cvec,scr_dvec);
for (r = 1; r <= nstate; r++) {
    Ax = V[r][r].r;
    Ay = V[r][r].i;
    norm = sqrt(Ax*Ax + Ay*Ay);
    element.r =  Ax/norm;
    element.i = -Ay/norm;
    for (s = 1; s <= nstate; s++) {
        V[r][s].r = element.r*V[r][s].r - element.i*V[r][s].i;
        V[r][s].i = element.i*V[r][s].r + element.r*V[r][s].i;
    }
}
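For reference, here's a sketch of how the transform X* = Vadj X V could be coded with the same conventions (the names X, Xstar, the scratch matrix T, and the loop index k are placeholders; everything assumes the same complex struct with .r/.i members and 1-based indexing used above):

/* T = X * V */
for (r = 1; r <= nstate; r++)
    for (s = 1; s <= nstate; s++) {
        T[r][s].r = 0.0;
        T[r][s].i = 0.0;
        for (k = 1; k <= nstate; k++) {
            T[r][s].r += X[r][k].r*V[k][s].r - X[r][k].i*V[k][s].i;
            T[r][s].i += X[r][k].r*V[k][s].i + X[r][k].i*V[k][s].r;
        }
    }
/* Xstar = Vadj * T, using Vadj[r][k] = conj(V[k][r]) */
for (r = 1; r <= nstate; r++)
    for (s = 1; s <= nstate; s++) {
        Xstar[r][s].r = 0.0;
        Xstar[r][s].i = 0.0;
        for (k = 1; k <= nstate; k++) {
            Xstar[r][s].r += V[k][r].r*T[k][s].r + V[k][r].i*T[k][s].i;
            Xstar[r][s].i += V[k][r].r*T[k][s].i - V[k][r].i*T[k][s].r;
        }
    }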
 
  • #9
Okay, cool. I'm not familiar with CH. What is it?
 
  • #11
Thank you! :smile:
 
  • #12
There was a mistake in my previous post: the real part of V[r][s] was overwritten before it was used to compute the new imaginary part. The fix should have been

for (r = 1; r <= nstate; r++) {
    V_r = V[r];                      /* row r of V */
    Ax = V_r[r].r;
    Ay = V_r[r].i;
    norm = sqrt(Ax*Ax + Ay*Ay);
    Ax = Ax/norm;                    /* (Ax, Ay) is now the unit phase of the diagonal element */
    Ay = Ay/norm;
    for (s = 1; s <= nstate; s++) {  /* multiply the whole row by conj(Ax + i*Ay) */
        element.r = V_r[s].r*Ax + V_r[s].i*Ay;
        element.i = V_r[s].i*Ax - V_r[s].r*Ay;
        V_r[s].r = element.r;
        V_r[s].i = element.i;
    }
}
 

1. What are eigenvectors and why are they important in coordinate transformation?

Eigenvectors are special vectors that do not change direction during a linear transformation. In other words, when a linear transformation is applied to an eigenvector, the resulting vector is a scalar multiple of the original eigenvector. Eigenvectors are important in coordinate transformation because they provide a way to simplify complex transformations and make them easier to understand and analyze.
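In symbols, a nonzero vector v is an eigenvector of a matrix A with eigenvalue λ if Av = λv.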

2. How do I find eigenvalues and eigenvectors for a given matrix?

To find eigenvalues and eigenvectors for a given matrix, you can use the characteristic equation and the eigenvector equation. The characteristic equation is det(A-λI)=0, where A is the given matrix and λ is the eigenvalue. Once you have found the eigenvalues, you can plug them into the eigenvector equation (A-λI)v=0 to find the corresponding eigenvectors.
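For example, for [tex]A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}[/tex], the characteristic equation is (2-λ)² - 1 = 0, giving λ = 1 and λ = 3. Plugging these back into (A-λI)v = 0 gives eigenvectors proportional to (1, -1) for λ = 1 and (1, 1) for λ = 3.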

3. Can a matrix have more than one set of eigenvectors and eigenvalues?

A matrix has only one set of eigenvalues, but it can have more than one set of eigenvectors. Any nonzero scalar multiple of an eigenvector is again an eigenvector, and when an eigenvalue is repeated, every nonzero vector in the corresponding eigenspace is an eigenvector for it. In these cases the eigenvectors corresponding to a given eigenvalue are not unique, but they all satisfy the eigenvector equation.
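For example, the 2×2 identity matrix has the single eigenvalue 1 with multiplicity two, and every nonzero vector in the plane is an eigenvector for it.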

4. How are eigenvectors used in diagonalization and change of basis?

Eigenvectors are used in diagonalization and change of basis because, for a diagonalizable matrix, they form a basis for the vector space. This means that any vector in the space can be expressed as a linear combination of the eigenvectors. In diagonalization, the eigenvectors are used to transform a matrix into a diagonal matrix, which makes it easier to perform calculations. In change of basis, the eigenvectors are used to find the transformation matrix that converts coordinates from one basis to another.
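Continuing the example above: putting the normalized eigenvectors of [tex]A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}[/tex] into the columns of [tex]V = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}[/tex] gives [tex]V^{-1} A V = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}[/tex].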

5. Can eigenvectors be complex numbers?

Yes, eigenvectors can have complex components. Complex eigenvalues and eigenvectors can arise even for matrices with real entries, such as rotation matrices, as well as for matrices with complex entries. In these cases the eigenvectors are complex vectors, but they still satisfy the eigenvector equation and can be used in coordinate transformation.
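For example, the real rotation matrix [tex]R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}[/tex] has complex eigenvalues e^{iθ} and e^{-iθ}, with eigenvectors proportional to (1, -i) and (1, i) respectively.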
