Simultaneous Diagonalization

In summary, the conversation discusses the problem of finding common eigenvectors for two self-adjoint operators represented by Hermitian matrices in a 3-dimensional complex inner-product space. The problem involves finding a set of three orthogonal eigenvectors that are common to both matrices, which have degenerate eigenvalues. The conversation touches upon using the definition of the commutator and the concept of simultaneous diagonalization to approach the problem, as well as the possibility of finding a common eigenvector by taking the product of the two matrices. The conversation also addresses the issue of finding a multiple of an eigenvector to make it a common eigenvector for both matrices.
  • #1
I am having a hard time with parts of this problem:

Suppose V is a 3-dimensional complex inner-product space. Let B1 = {|v1>,|v2>,|v3>} be an orthonormal basis for V.

Let H1 and H2 be self-adjoint operators represented in the basis B1 by Hermitian matrices.

I won't list them, but they are 3x3 matrices. I have found the eigenvalues and eigenvectors of each. They both have degenerate eigenvalues. H1 has Lambda = -5,-5,15 and H2 has Lambda = 10,10,20.

My questions are:

(1) By considering the commutator, show that the two matrices can be simultaneously diagonalized. I know that by definition of the commutator, [H1,H2] = (H1)(H2) - (H2)(H1) = 0. Also, I know that D = U^-1 H U, where D is the diagonalized matrix. I need a hint on how to begin to show that they are simultaneously diagonalizable.

(2) As I mentioned before, I have found the eigenvectors for each. I am asked to find the eigenvectors common to both matrices and determine a set of three orthogonal eigenvectors. The eigenvectors I found are not the same for both matrices, but they are very similar in that they are basically flipped, with a negative sign in one value. Do I need to do something to make these eigenvectors the same for both matrices, or did I possibly make a math error?

(3) My last question: how do I verify that, under a unitary transformation to this basis, both matrices are diagonalized?

Any formulas, procedures, definitions, etc... would be greatly appreciated.
  • #2
To confirm it, you would just do it directly. You can prove a theorem that says that two operators have a common set of eigenvectors if and only if they commute, so you could start by proving that. As for the second part, remember that any multiple of an eigenvector is also an eigenvector with the same eigenvalue.
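As a quick numerical sanity check of the commutator condition (a sketch in Python with NumPy, using the explicit matrices that are worked out later in this thread):

```python
import numpy as np

# The two Hermitian (here real symmetric) matrices recovered in post #8.
H1 = np.array([[-3, 0, 6], [0, -5, 0], [6, 0, 13]])
H2 = np.array([[19, 0, -3], [0, 10, 0], [-3, 0, 11]])

# [H1, H2] = H1 H2 - H2 H1 must vanish for simultaneous diagonalization.
commutator = H1 @ H2 - H2 @ H1
print(np.allclose(commutator, 0))  # True
```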
  • #3
Thanks, that's helpful. Here is where I'm stuck... I checked my math and I am not finding any errors that stand out. The eigenvectors I came up with are:

H1: |v1> = |v2> = ( 1 1 -1/3) and |v3> = (1/3 0 1 )
H2: |u1> = |u2> = (1/3 1 1 ) and |u3> = ( 1 0 -1/3)

These are actually column vectors (hence the ket notation) but it was easier to write them as rows. So, I can't figure out a multiple that will make |v1>,|v2> = |u1>,|u2> and |v3> = |u3>. And what about the negatives?
  • #4
MalleusScientiarum said:
You can prove a theorem that says that two operators have a common set of eigenvectors if and only if they commute...

When you say common set of eigenvectors, do you mean all eigenvectors in common or just some (like one in common, for instance)?


blanik: hermitian matrices always have a set of orthogonal eigenvectors that span the vector space they operate on. So you should have 3 different eigenvectors for each matrix, even though both matrices have degenerate eigenvalues. Having two eigenvalues the same means that there is a 2-d subspace of eigenvectors of that matrix. So whichever two eigenvectors you find corresponding to the degenerate eigenvalue, any linear combination of those two eigenvectors is also an eigenvector with the same eigenvalue.
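The point about linear combinations in a degenerate eigenspace can be checked numerically. A minimal sketch (using the explicit H1 recovered later in the thread, and two of its orthogonal eigenvectors for the repeated eigenvalue -5):

```python
import numpy as np

# H1 as recovered in post #8; its eigenvalue -5 is doubly degenerate.
H1 = np.array([[-3, 0, 6], [0, -5, 0], [6, 0, 13]])

# Two orthogonal eigenvectors spanning the -5 eigenspace.
a = np.array([0.0, 1.0, 0.0])
b = np.array([3.0, 0.0, -1.0])

# Any linear combination stays inside that eigenspace.
w = 2.0 * a - 7.0 * b
print(np.allclose(H1 @ w, -5 * w))  # True
```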
  • #5
Well, the question specifically states the following:

"Find the eigenvectors common to both. (Note: Both matrices have degenerate eigenvalues. You will need to compare the eigenvectors of the two matrices and determine a set of three orthogonal eigenvectors common to both matrices.)"

So, the degenerate eigenvalues are -5 for H1 and 10 for H2. So, comparing the eigenvectors I listed earlier, how do I determine the set of three common to both?
  • #6
What if I were to find the eigenvectors of the product matrix of (H1)(H2) since (H1)(H2) = (H2)(H1)?
  • #7
I think finding the eigenvectors of the individual matrices H1 and H2, as you were doing before, is the right way to go. Here are some things to keep in mind. Recap of what I said in the edit of my last post: since H1 and H2 are hermitian operators on a 3-d vector space, they each have a set of (three) eigenvectors that span the space. So you need to find another eigenvector corresponding to -5 for H1 and another eigenvector corresponding to 10 for H2. For each matrix, you have a 2-d subspace (a plane through the origin) of eigenvectors corresponding to the degenerate eigenvalue.

Now since you determined that H1 and H2 commute, you can show that H1 preserves the 2-d subspace associated with H2, and vice versa. That is, if |x> is an eigenvector of H2 with eigenvalue 10, then so is the vector H1|x>. Thus H1 is a hermitian operator on this 2-d subspace, and has a pair of orthogonal eigenvectors that span this subspace. Similarly, H2 is a hermitian operator on the 2-d subspace associated with H1, and has a pair of orthogonal eigenvectors that span that subspace.
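This step can be written out in one line. If [H1,H2] = 0 and H2|x> = 10|x>, then

[tex]H2\,(H1|x\rangle) = H1\,(H2|x\rangle) = H1\,(10|x\rangle) = 10\,(H1|x\rangle)[/tex]

so H1|x> either vanishes or is again an eigenvector of H2 with eigenvalue 10.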

For a hermitian matrix, eigenvectors corresponding to different eigenvalues are orthogonal. Thus for any eigenbasis of H1, two vectors lie in the plane P1 of vectors with eigenvalue -5, and the third lies along the line orthogonal to this plane. Similarly, for any eigenbasis of H2, two vectors lie in the plane P2 of vectors with eigenvalue 10, and the third lies along the line orthogonal to this plane. There appear to be two main possibilities here. Either P1 and P2 are the same plane, or they aren't, in which case their intersection is a line. The first case, if it happens to hold, will certainly allow you to find a common eigenbasis for both matrices. If however P1 and P2 are not the same plane, it is still possible to find a common eigenbasis if P1 and P2 meet in a certain way.
  • #8
blanik, Here is (what I understand to be) the information you have given us:

[tex]H1 \left(\begin{array}{c}3\\3\\-1\end{array}\right) = -5 \left(\begin{array}{c}3\\3\\-1\end{array}\right)[/tex]

[tex]H1 \left(\begin{array}{c}?\\?\\?\end{array}\right) = -5 \left(\begin{array}{c}?\\?\\?\end{array}\right)[/tex]

[tex]H1 \left(\begin{array}{c}1\\0\\3\end{array}\right) = 15 \left(\begin{array}{c}1\\0\\3\end{array}\right)[/tex]

[tex]H2 \left(\begin{array}{c}1\\3\\3\end{array}\right) = 10 \left(\begin{array}{c}1\\3\\3\end{array}\right)[/tex]

[tex]H2 \left(\begin{array}{c}?\\?\\?\end{array}\right) = 10 \left(\begin{array}{c}?\\?\\?\end{array}\right)[/tex]

[tex]H2 \left(\begin{array}{c}3\\0\\-1\end{array}\right) = 20 \left(\begin{array}{c}3\\0\\-1\end{array}\right)[/tex]

[tex]H1 \;H2 = H2 \;H1[/tex]

where I have eliminated fractions for convenience.

As was mentioned earlier, you aren't showing all the eigenvectors (you only listed one eigenvector of H1 with eigenvalue -5, and only one eigenvector of H2 with eigenvalue 10), but the information you have given is enough to determine the simultaneous eigenvectors.

Since (1,0,3) is a nondegenerate eigenvector of H1, it must be an eigenvector of H2. Since it is not the nondegenerate eigenvector of H2, it must be a degenerate eigenvector of H2. Hence (1,0,3) is a simultaneous eigenvector of H1 and H2. Similarly, (3,0,-1) is a nondegenerate eigenvector of H2, and so must be a degenerate eigenvector of H1. These give two of the three simultaneous eigenvectors for H1 and H2. The remaining simultaneous eigenvector is perpendicular to both, so it can be found with a cross product.

Computing (1,0,3) x (3,0,-1) = (0,10,0), we can choose the perpendicular eigenvector as (0,1,0), which has eigenvalue -5 under H1 and 10 under H2. The eigenvector equations are then:

[tex]H1 \left(\begin{array}{c}1\\0\\3\end{array}\right) = 15 \left(\begin{array}{c}1\\0\\3\end{array}\right)[/tex]

[tex]H2 \left(\begin{array}{c}1\\0\\3\end{array}\right) = 10 \left(\begin{array}{c}1\\0\\3\end{array}\right)[/tex]

[tex]H1 \left(\begin{array}{c}3\\0\\-1\end{array}\right) = -5 \left(\begin{array}{c}3\\0\\-1\end{array}\right)[/tex]

[tex]H2 \left(\begin{array}{c}3\\0\\-1\end{array}\right) = 20 \left(\begin{array}{c}3\\0\\-1\end{array}\right)[/tex]

[tex]H1 \left(\begin{array}{c}0\\1\\0\end{array}\right) = -5 \left(\begin{array}{c}0\\1\\0\end{array}\right)[/tex]

[tex]H2 \left(\begin{array}{c}0\\1\\0\end{array}\right) = 10 \left(\begin{array}{c}0\\1\\0\end{array}\right)[/tex]

Note that (1,0,3), (3,0,-1) and (0,1,0) are perpendicular, as required.
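The cross-product step above can be checked numerically, for instance with NumPy:

```python
import numpy as np

v1 = np.array([1, 0, 3])   # nondegenerate eigenvector of H1
v2 = np.array([3, 0, -1])  # nondegenerate eigenvector of H2

# The cross product gives the remaining mutually perpendicular direction.
v3 = np.cross(v1, v2)
print(v3)  # [ 0 10  0]

# All three directions are pairwise orthogonal.
print(v1 @ v2, v1 @ v3, v2 @ v3)  # 0 0 0
```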

Since we have a set of simultaneous eigenvectors, we can also solve for H1 and H2. Since (0,1,0) is a standard basis vector, the second row and second column of H1 must be (0, -5, 0), and those of H2 must be (0, 10, 0). The remaining four entries of each matrix can be determined with a very small amount of arithmetic, giving the matrices:

[tex]H1 = \left(\begin{array}{ccc}
-3 & 0 & 6 \\ 0 & -5 & 0 \\ 6 & 0 & 13 \end{array}\right)[/tex]

[tex]H2 = \left(\begin{array}{ccc}
19 & 0 & -3 \\ 0 & 10 & 0 \\ -3 & 0 & 11 \end{array}\right)[/tex]

It is easy to verify that these matrices commute and have the correct eigenvalues.
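One way to carry out both the reconstruction and the verification is the spectral decomposition H = Σ λ_i |u_i><u_i| over the normalized simultaneous eigenvectors. A sketch in Python with NumPy:

```python
import numpy as np

# Normalized simultaneous eigenvectors from above.
u1 = np.array([1, 0, 3]) / np.sqrt(10)
u2 = np.array([0, 1, 0])
u3 = np.array([3, 0, -1]) / np.sqrt(10)

# Spectral reconstruction: H = sum of eigenvalue * outer(u, u).
H1 = 15 * np.outer(u1, u1) - 5 * np.outer(u2, u2) - 5 * np.outer(u3, u3)
H2 = 10 * np.outer(u1, u1) + 10 * np.outer(u2, u2) + 20 * np.outer(u3, u3)

print(np.round(H1))                          # the H1 matrix shown above
print(np.round(H2))                          # the H2 matrix shown above
print(np.allclose(H1 @ H2, H2 @ H1))         # True: they commute
print(np.linalg.eigvalsh(H1))                # [-5. -5. 15.]
print(np.linalg.eigvalsh(H2))                # [10. 10. 20.]
```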


What is simultaneous diagonalization?

Simultaneous diagonalization is a process in linear algebra where multiple matrices are brought into diagonal form by a single change of basis.

Why is simultaneous diagonalization useful?

Simultaneous diagonalization is useful because it simplifies the computation of matrix operations, making it easier to solve systems of linear equations and find eigenvalues and eigenvectors.

What are the requirements for simultaneous diagonalization?

To be simultaneously diagonalizable, the matrices must commute and each must be diagonalizable on its own. This means each must be a square matrix with a full set of linearly independent eigenvectors; the matrices then share a common eigenbasis.

How is simultaneous diagonalization performed?

Simultaneous diagonalization is performed by first finding a common set of eigenvectors of the matrices, and then using those eigenvectors as the columns of a matrix P. The transformation P^-1 H P then turns each of the original matrices into a diagonal matrix simultaneously.
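Applied to the thread's example, and answering question (3) above, this can be sketched in Python with NumPy. Since the matrices are Hermitian, P built from orthonormal eigenvectors is unitary (here real orthogonal), so P^-1 = P^T:

```python
import numpy as np

H1 = np.array([[-3, 0, 6], [0, -5, 0], [6, 0, 13]], dtype=float)
H2 = np.array([[19, 0, -3], [0, 10, 0], [-3, 0, 11]], dtype=float)

# Columns of P are the common orthonormal eigenvectors found in the thread.
P = np.column_stack([
    np.array([1, 0, 3]) / np.sqrt(10),
    np.array([0, 1, 0]),
    np.array([3, 0, -1]) / np.sqrt(10),
])

# P is orthogonal, so the unitary transform is P^T H P.
D1 = P.T @ H1 @ P
D2 = P.T @ H2 @ P
print(np.allclose(D1, np.diag([15, -5, -5])))  # True
print(np.allclose(D2, np.diag([10, 10, 20])))  # True
```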

How can simultaneous diagonalization be applied in real-world problems?

Simultaneous diagonalization can be applied in many areas such as physics, engineering, and computer science. It is commonly used in solving differential equations, optimization problems, and in data analysis techniques such as principal component analysis.
