Something about Hermitian matrices

twoflower

Hi all,

I don't understand one part of the proof of this theorem:

All eigenvalues of a Hermitian matrix A are real numbers and, moreover, there exists a unitary matrix R such that

[tex]
R^{-1}AR
[/tex]

is diagonal


Proof: by induction with respect to n (the order of the matrix A).

For n = 1 it's obvious.
Suppose that the theorem holds for 1, 2, ..., n-1
We know that there exists an eigenvalue [itex]\lambda[/itex] and a corresponding eigenvector [itex]x \in \mathbb{C}^{n}[/itex].
Using Steinitz's theorem, we can extend [itex]x[/itex] to an orthonormal basis of [itex]\mathbb{C}^{n}[/itex].
Suppose that [itex]||x|| = 1[/itex] and construct the matrix [itex]P_n[/itex] from the vectors of this basis ([itex]P_n[/itex] will have these vectors as its columns).

[itex]P_n[/itex] is unitary, since [itex]P_{n}^{H}P_n = I[/itex]: the standard inner product of two different vectors of the orthonormal basis is zero, and the inner product of a vector with itself is 1.
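Entrywise (a small check; here [itex]p_1, \dots, p_n[/itex] is just notation for the columns of [itex]P_n[/itex]):

[tex]
\left(P_{n}^{H}P_{n}\right)_{ij} = p_{i}^{H}p_{j} = \delta_{ij}
[/tex]

so [itex]P_{n}^{H}P_{n} = I[/itex].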

This holds:

[tex]
\left(P_{n}^{H}A_{n}P_{n}\right)^{H} = P_{n}^{H}A_{n}^{H}\left(P_{n}^{H}\right)^{H} = P_{n}^{H}A_{n}P_{n}
[/tex]

The last line is what I don't understand; it's probably trivial, but I can't see why:

[tex]
\left(P_{n}^{H}A_{n}P_{n}\right)^{H} = \left(P_{n}^{H}\right)^{H}A_{n}^{H}P_{n}^{H} = P_{n}^{H}A_{n}^{H}\left(P_{n}^{H}\right)^{H}
[/tex]

(the second equality)

Thank you for the explanation.
 

matt grime

Because you're forgetting that taking the dagger reverses the order of the matrices. I'll use star instead: (AB)* = B*A*.

The second equality as you have it is wrong, but then it isn't supposed to be true.
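Spelled out step by step, applying [itex](AB)^{H} = B^{H}A^{H}[/itex] twice and then using [itex]A_{n}^{H} = A_{n}[/itex] and [itex]\left(P_{n}^{H}\right)^{H} = P_{n}[/itex]:

[tex]
\left(P_{n}^{H}A_{n}P_{n}\right)^{H} = P_{n}^{H}\left(P_{n}^{H}A_{n}\right)^{H} = P_{n}^{H}A_{n}^{H}\left(P_{n}^{H}\right)^{H} = P_{n}^{H}A_{n}P_{n}
[/tex]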
 
matt grime said:
Because you're forgetting that taking the dagger reverses the order of the matrices. I'll use star instead: (AB)* = B*A*.

The second equality as you have it is wrong, but then it isn't supposed to be true.
Thanks a lot, Matt. I was looking at it for ten minutes and it's as simple as ordinary transposition :rolleyes:
 

dextercioby

Well, the first part (that the eigenvalues of Hermitian operators are real) can be proven quite easily for a Hermitian linear operator defined on an everywhere-dense subset of a separable Hilbert space.
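In the matrix setting of this thread the standard argument is short (a sketch; here [itex]x \neq 0[/itex] is an eigenvector with [itex]Ax = \lambda x[/itex] and [itex]A^{H} = A[/itex]):

[tex]
\lambda\, x^{H}x = x^{H}\left(Ax\right) = \left(A^{H}x\right)^{H}x = \left(Ax\right)^{H}x = \overline{\lambda}\, x^{H}x
[/tex]

and since [itex]x^{H}x > 0[/itex], it follows that [itex]\lambda = \overline{\lambda}[/itex], i.e. [itex]\lambda \in \mathbb{R}[/itex].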

Daniel.
 

dextercioby

The key to the second part is to remark that the matrix

[tex] M^{\dagger}AM \ ,\ M\in U(n,\mathbb{C})[/tex]

is Hermitian, which means that the associated linear operator is Hermitian. A Hermitian linear operator on a finite-dimensional complex Hilbert space admits a spectral decomposition (moreover, the spectrum is purely discrete), which means that [itex]M[/itex] can be chosen so that the operator [itex] M^{\dagger}AM [/itex] has zero off-diagonal matrix elements.
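Written out, the spectral decomposition in the matrix case says that for a Hermitian [itex]A[/itex] there is a unitary [itex]U[/itex] and a real diagonal [itex]D[/itex] (here [itex]u_{i}[/itex] denotes the [itex]i[/itex]-th column of [itex]U[/itex]) with

[tex]
A = UDU^{\dagger} = \sum_{i=1}^{n} \lambda_{i}\, u_{i}u_{i}^{\dagger}, \qquad \lambda_{i} \in \mathbb{R}
[/tex]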

Daniel.
 
dextercioby said:
The key to the second part is to remark that the matrix

[tex] M^{\dagger}AM \ ,\ M\in U(n,\mathbb{C})[/tex]

is Hermitian, which means that the associated linear operator is Hermitian. A Hermitian linear operator on a finite-dimensional complex Hilbert space admits a spectral decomposition (moreover, the spectrum is purely discrete), which means that [itex]M[/itex] can be chosen so that the operator [itex] M^{\dagger}AM [/itex] has zero off-diagonal matrix elements.

Daniel.
Thank you, Daniel, for this explanation, but I don't have a clue what a Hilbert space is (I've only heard of it) or what a Hermitian linear operator is.

However, I've been studying the proof further and I again encountered a place I don't understand.

If I continue from where I finished my initial post:

...

And thus [itex]P_{n}^{H}A_{n}P_{n}[/itex] is a Hermitian matrix.

Next,

[tex]
\left( \begin{array}{cccc} \lambda & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A_{n-1} & \\ 0 & & & \end{array} \right)
[/tex]

Because this matrix is equal to its Hermitian transpose, [itex]\lambda \in \mathbb{R}[/itex].
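(A small check of that step: write [itex]M[/itex] for the matrix above; the (1,1) entry of a Hermitian matrix equals its own complex conjugate.)

[tex]
\lambda = M_{11} = \left(M^{H}\right)_{11} = \overline{M_{11}} = \overline{\lambda} \;\Rightarrow\; \lambda \in \mathbb{R}
[/tex]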

// I'm not sure why this matrix is here and whether it's meant to be the matrix [itex]P_{n}^{H}A_{n}P_{n}[/itex]; I really don't know... anyway, let's continue.

From the induction hypothesis there exists a unitary matrix [itex]R_{n-1}[/itex] such that

[tex]
R_{n-1}^{-1}A_{n-1}R_{n-1} = D_{n-1}
[/tex]

Let's take

[tex]
S = \left( \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & R_{n-1} & \\ 0 & & & \end{array} \right)
[/tex]

[tex]
R_n = P_{n}S
[/tex]

S is unitary, as is [itex]P_{n}[/itex]. Is their product also unitary? (In other words, is the product of two unitary matrices a unitary matrix?) Let's see.

[tex]
R_{n}^{H}R_{n} = \left(P_{n}S\right)^{H}P_{n}S = S^{H}P_{n}^{H}P_{n}S = S^{H}S = I
[/tex]

So [itex]R_{n}[/itex] is also unitary. Is [itex]R_{n}[/itex] the matrix we're looking for?

[tex]
R_{n}^{-1}A_{n}R_{n} = \left(P_{n}S\right)^{H}AP_{n}S = S^{H}P_{n}^{H}AP_{n}S =
\left( \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & R_{n-1}^{H} & \\ 0 & & & \end{array} \right)
\left( \begin{array}{cccc} \lambda & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A_{n-1} & \\ 0 & & & \end{array} \right)
\left( \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & R_{n-1} & \\ 0 & & & \end{array} \right) =
\left( \begin{array}{cccc} \lambda & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & D_{n-1} & \\ 0 & & & \end{array} \right) = D
[/tex]

Q.E.D.

What I don't understand is that according to this,

[tex]
P_{n}^{H}AP_{n} = \left( \begin{array}{cccc} \lambda & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A_{n-1} & \\ 0 & & & \end{array} \right)
[/tex]

Why?

Thank you.
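
One way to see it (a sketch, writing [itex]p_{1} = x, p_{2}, \dots, p_{n}[/itex] for the columns of [itex]P_{n}[/itex]): the first column of [itex]AP_{n}[/itex] is [itex]Ax = \lambda x[/itex], so

[tex]
\left(P_{n}^{H}AP_{n}\right)_{i1} = p_{i}^{H}\left(Ax\right) = \lambda\, p_{i}^{H}x = \lambda\, \delta_{i1}
[/tex]

i.e. the first column of [itex]P_{n}^{H}AP_{n}[/itex] is [itex](\lambda, 0, \dots, 0)^{T}[/itex]. Since [itex]P_{n}^{H}AP_{n}[/itex] is Hermitian (shown above), its first row is [itex](\lambda, 0, \dots, 0)[/itex] as well, and the remaining [itex](n-1)\times(n-1)[/itex] block is what the proof names [itex]A_{n-1}[/itex].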
 
