MHB Shankar - Simultaneous Diagonalisation of Hermitian Matrices

bugatti79
Asked to determine the eigenvalues and eigenvectors common to both of these matrices of

\Omega=\begin{bmatrix}1 &0 &1 \\ 0& 0 &0 \\ 1& 0 & 1\end{bmatrix} and \Lambda=\begin{bmatrix}2 &1 &1 \\ 1& 0 &-1 \\ 1& -1 & 2\end{bmatrix}

and then to verify under a unitary transformation that both can be simultaneously diagonalised. Since Omega is degenerate and Lambda is not, you must be prudent in deciding which matrix dictates the choice of basis.

1) What does he mean by being prudent in the choice of matrix?

2) There is only one common eigenvalue, \lambda=2. Should I expect the same eigenvector for both matrices for \lambda=2? Wolfram Alpha shows different eigenvectors for the same \lambda value.
eigenvector {1,0,1},{0,0,0},{1,0,1} - Wolfram|Alpha

eigenvector {2,1,1},{1,0,-1},{1,-1,2} - Wolfram|Alpha

3) To show simultaneous diagonalisation I applied the unitary transformations

U^{\dagger} \Omega U and U^{\dagger} \Lambda U to diagonalise the matrices, with the diagonal entries being the eigenvalues, where the columns of U are the corresponding eigenvectors.

However, Wolfram shows

{{1, 0, 1}, {-1, 0, 1}, {0, 1, 0}} * {{1,0,1},{0,0,0},{1,0,1}} * {{1,-1,0},{0,0,1},{1,1,0}} - Wolfram|Alpha

{{1, 0, 1}, {-1, -1, 1}, {-1, 2, 1}} * {{2,1,1},{1,0,-1},{1,-1,2}} * {{1, -1, -1}, {0, -1, 2}, {1, 1, 1}} - Wolfram|Alpha

Any ideas?
 
Couple of comments:

1. Pick the non-degenerate matrix to get your eigenvectors. You're not guaranteed that the degenerate matrix's eigenvectors will span the space and be a basis.

2. You need the transformation to be unitary, which means the eigenvectors need to be orthonormal. Use Gram-Schmidt to orthonormalize the eigenbasis.
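The two steps above can be sketched numerically; here is a minimal NumPy illustration (not from the thread), using classical Gram-Schmidt on the hand-computed eigenvectors of \Lambda:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors.

    Classical Gram-Schmidt, as a sketch; not numerically robust
    for nearly dependent inputs.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= (b @ w) * b   # subtract the projection onto b
        basis.append(w / np.linalg.norm(w))
    # stack the orthonormal vectors as the columns of U
    return np.column_stack(basis)

# Eigenvectors of Lambda (the non-degenerate matrix) from the thread;
# they are already mutually orthogonal, so Gram-Schmidt here
# reduces to normalization.
U = gram_schmidt([np.array([1., 0., 1.]),
                  np.array([-1., -1., 1.]),
                  np.array([-1., 2., 1.])])

# U is unitary exactly when U^T U is the identity
print(np.round(U.T @ U, 10))
```

Because the columns come out orthonormal, U^{\dagger} U = I, which is what makes U^{\dagger} M U a genuine unitary (not merely invertible) transformation.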
 
Ackbach said:
Couple of comments:

1. Pick the non-degenerate matrix to get your eigenvectors. You're not guaranteed that the degenerate matrix's eigenvectors will span the space and be a basis.

2. You need the transformation to be unitary, which means the eigenvectors need to be orthonormal. Use Gram-Schmidt to orthonormalize the eigenbasis.

2) I thought that the eigenvectors of a Hermitian matrix were orthogonal, as per the 4th point of the properties in this wiki link: https://en.wikipedia.org/wiki/Hermitian_matrix
That's why I didn't orthogonalise them...

Eigenvectors of Hermitian Matrix
eigenvectors {{2,1,1},{1,0,-1},{1,-1,2}} - Wolfram|Alpha

Check whether the eigenvectors are orthogonal (I put the eigenvectors into a matrix):
{{1, 0, 1}, {-1, -1, 1}, {-1, 2, 1}} orthogonal - Wolfram|Alpha

Is the wiki wrong?
 
bugatti79 said:
2) I thought that the eigenvectors of a Hermitian matrix were orthogonal, as per the 4th point of the properties in this wiki link: https://en.wikipedia.org/wiki/Hermitian_matrix
That's why I didn't orthogonalise them...

Eigenvectors of Hermitian Matrix
eigenvectors {{2,1,1},{1,0,-1},{1,-1,2}} - Wolfram|Alpha

Check whether the eigenvectors are orthogonal (I put the eigenvectors into a matrix):
{{1, 0, 1}, {-1, -1, 1}, {-1, 2, 1}} orthogonal - Wolfram|Alpha

Is the wiki wrong?

Ok, fair point about orthogonality. However, orthogonal does not imply orthonormal. An orthonormal basis is orthogonal AND every vector has length 1. You need an orthonormal set of eigenvectors to form an orthogonal matrix. That's not a typo: orthogonal matrix implies the columns are orthonormal.

So your process is simpler if your original matrices are Hermitian: once you get the eigenvectors, just normalize them.
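The whole procedure can be verified numerically; the sketch below (not part of the thread) uses NumPy's `eigh`, which already returns an orthonormal eigenbasis for a Hermitian input, so the normalization step comes for free:

```python
import numpy as np

# The two real symmetric (hence Hermitian) matrices from the problem
Omega = np.array([[1., 0., 1.],
                  [0., 0., 0.],
                  [1., 0., 1.]])
Lam = np.array([[2., 1., 1.],
                [1., 0., -1.],
                [1., -1., 2.]])

# Diagonalize the NON-degenerate matrix Lambda.  eigh returns the
# eigenvalues in ascending order and orthonormal eigenvectors as
# the columns of U, so U is unitary.
eigvals, U = np.linalg.eigh(Lam)

# Apply the SAME unitary transformation to both matrices
D_lam = U.T @ Lam @ U
D_omega = U.T @ Omega @ U

# Both results should be diagonal (off-diagonal entries ~ 0),
# with the eigenvalues of each matrix on the diagonal.
print(np.round(D_lam, 10))
print(np.round(D_omega, 10))
```

Since \Lambda has distinct eigenvalues (-1, 2, 3), its eigenvectors are unique up to sign, and the same U also diagonalizes \Omega, which is exactly the simultaneous diagonalisation the exercise asks for.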
 