Something about Hermitian matrices

  • Thread starter twoflower
  • Start date
  • Tags
    Hermitian
In summary: the theorem states that all eigenvalues of a Hermitian matrix A are real numbers and that there exists a unitary matrix R such that R^{-1}AR is diagonal. Matt explained the conjugate-transpose reversal rule used in the proof, which proceeds by induction, and the final step is to show that the matrix R_n is the matrix we're looking for.
  • #1
twoflower
Hi all,

I don't understand one part of the proof of this theorem:

All eigenvalues of a Hermitian matrix A are real numbers and, moreover, there exists a unitary matrix R such that

[tex]
R^{-1}AR
[/tex]

is diagonal


Proof: By induction with respect to n (order of matrix A)

For n = 1 it's obvious.
Suppose that the theorem holds for 1, 2, ..., n-1
We know that [itex]\exists[/itex] an eigenvalue [itex]\lambda[/itex] with a corresponding eigenvector [itex]x \in \mathbb{C}^{n}[/itex].
Using Steinitz's theorem, we can extend [itex]x[/itex] to an orthonormal basis of [itex]\mathbb{C}^{n}[/itex].
Suppose that [itex]||x|| = 1[/itex] and construct the matrix [itex]P_n[/itex] from the vectors of this basis ([itex]P_n[/itex] will have these vectors as its columns).

[itex]P_n[/itex] is unitary because [itex]P_{n}^{H}P_n = I[/itex]: the standard inner product of two different vectors of the orthonormal basis is zero, and the inner product of a vector with itself is 1.
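This step can be sketched numerically. The snippet below (a NumPy illustration with made-up data, not part of the original proof) extends a unit vector to an orthonormal basis via a QR factorization and checks that the resulting matrix is unitary:

```python
import numpy as np

# Extend a unit vector x to an orthonormal basis of C^n, as the proof
# does via Steinitz's theorem, here using a QR factorization.
rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                         # normalize so ||x|| = 1

# Put x in the first column, fill the rest arbitrarily, orthonormalize.
# (QR keeps the first column equal to x up to a complex phase.)
M = np.column_stack([x, rng.standard_normal((n, n - 1))])
P, _ = np.linalg.qr(M)                         # columns of P orthonormal

# P is unitary: P^H P = I
assert np.allclose(P.conj().T @ P, np.eye(n))
```

The assertion is exactly the condition [itex]P_{n}^{H}P_n = I[/itex] stated above.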

This holds:

[tex]
\left(P_{n}^{H}A_{n}P_{n}\right)^{H} = P_{n}^{H}A_{n}^{H}\left(P_{n}^{H}\right)^{H} = P_{n}^{H}A_{n}P_{n}
[/tex]

The last line is what I don't understand; it's probably trivial, but I can't see that

[tex]
\left(P_{n}^{H}A_{n}P_{n}\right)^{H} = \left(P_{n}^{H}\right)^{H}A_{n}^{H}P_{n}^{H} = P_{n}^{H}A_{n}^{H}\left(P_{n}^{H}\right)^{H}
[/tex]

(the second equality)

Thank you for the explanation.
 
  • #2
Because you're forgetting that taking the dagger reverses the order of the matrices. I'll use star instead: (AB)* = B*A*.

The second equality as you have it is wrong, but then it isn't supposed to be true.
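The reversal rule can be checked numerically; this small NumPy sketch (with arbitrary random matrices, purely illustrative) confirms that the conjugate transpose of a product reverses the order:

```python
import numpy as np

# Check the rule (AB)^H = B^H A^H on random complex matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

lhs = (A @ B).conj().T          # dagger of the product
rhs = B.conj().T @ A.conj().T   # product of daggers, order reversed
assert np.allclose(lhs, rhs)
```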
 
  • #3
matt grime said:
Because you're forgetting that taking the dagger reverses the order of the matrices. I'll use star instead: (AB)* = B*A*.

The second equality as you have it is wrong, but then it isn't supposed to be true.

Thanks a lot, Matt. I was looking at it for ten minutes, and it's as simple as ordinary transposition :rolleyes:
 
  • #4
Well, the first part (that the eigenvalues of Hermitian operators are real) can be proven quite easily even for a Hermitian linear operator defined on an everywhere-dense subset of a separable Hilbert space.

Daniel.
 
  • #5
The key to the second part is to remark that the matrix

[tex] M^{\dagger}AM \ ,\ M\in U(n,\mathbb{C})[/tex]

is Hermitian, which means that the associated linear operator is Hermitian. A Hermitian linear operator on a finite-dimensional complex Hilbert space admits a spectral decomposition (moreover, the spectrum is purely discrete), which means that for a suitable choice of [itex]M[/itex] the operator [itex] M^{\dagger}AM [/itex] has zero off-diagonal matrix elements.

Daniel.
 
  • #6
dextercioby said:
The key to the second part is to remark that the matrix

[tex] M^{\dagger}AM \ ,\ M\in U(n,\mathbb{C})[/tex]

is Hermitian, which means that the associated linear operator is Hermitian. A Hermitian linear operator on a finite-dimensional complex Hilbert space admits a spectral decomposition (moreover, the spectrum is purely discrete), which means that for a suitable choice of [itex]M[/itex] the operator [itex] M^{\dagger}AM [/itex] has zero off-diagonal matrix elements.

Daniel.

Thank you, Daniel, for this explanation, but I don't have a clue what a Hilbert space is (I've only heard of it) or what a Hermitian linear operator is.

However, I've been studying the proof further, and I again encountered a place I don't understand.

If I continue from where I finished my initial post:

...

And thus [itex]P_{n}^{H}A_{n}P_{n}[/itex] is a Hermitian matrix.

Next,

[tex]
\left(\begin{array}{cc} \lambda & \mathbf{0} \\ \mathbf{0} & A_{n-1} \end{array}\right)
[/tex]

Because this matrix is equal to its Hermitian transpose, [itex]\lambda \in \mathbb{R}[/itex]

// I'm not sure why this matrix is here, or whether it is meant to be the matrix [itex]P_{n}^{H}A_{n}P_{n}[/itex]; I really don't know...anyway, let's continue

From the induction hypothesis, [itex]\exists[/itex] a unitary matrix [itex]R_{n-1}[/itex] such that

[tex]
R_{n-1}^{-1}A_{n-1}R_{n-1} = D_{n-1}
[/tex]

Let's take

[tex]
S = \left(\begin{array}{cc} 1 & \mathbf{0} \\ \mathbf{0} & R_{n-1} \end{array}\right)
[/tex]

[tex]
R_n = P_{n}S
[/tex]

S is unitary, and so is [itex]P_{n}[/itex]. Is their product also unitary? (In other words, is the product of two unitary matrices a unitary matrix?) Let's see.

[tex]
R_{n}^{H}R_{n} = \left(P_{n}S\right)^{H}P_{n}S = S^{H}P_{n}^{H}P_{n}S = I
[/tex]

So, it holds that [itex]R_{n}[/itex] is also unitary. Is [itex]R_{n}[/itex] the matrix we're looking for?
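The computation above can be mirrored numerically; this sketch (illustrative, with random unitaries built via QR) checks that a product of two unitary matrices is again unitary:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(n, rng):
    # QR factorization of a random complex matrix yields a unitary Q.
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(Z)
    return Q

P = random_unitary(4, rng)
S = random_unitary(4, rng)
R = P @ S

# R^H R = S^H P^H P S = S^H S = I, matching the derivation above.
assert np.allclose(R.conj().T @ R, np.eye(4))
```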

[tex]
R_{n}^{-1}A_{n}R_{n} = \left(P_{n}S\right)^{H}A_{n}P_{n}S = S^{H}P_{n}^{H}A_{n}P_{n}S = \left(\begin{array}{cc} 1 & \mathbf{0} \\ \mathbf{0} & R_{n-1}^{H} \end{array}\right)\left(\begin{array}{cc} \lambda & \mathbf{0} \\ \mathbf{0} & A_{n-1} \end{array}\right)\left(\begin{array}{cc} 1 & \mathbf{0} \\ \mathbf{0} & R_{n-1} \end{array}\right) = \left(\begin{array}{cc} \lambda & \mathbf{0} \\ \mathbf{0} & D_{n-1} \end{array}\right) = D
[/tex]

Q.E.D

What I don't understand is that according to this,

[tex]
P_{n}^{H}A_{n}P_{n} = \left(\begin{array}{cc} \lambda & \mathbf{0} \\ \mathbf{0} & A_{n-1} \end{array}\right)
[/tex]

Why?

Thank you.
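One way to build intuition for this step is to check it numerically. The sketch below (a NumPy illustration with a random Hermitian matrix, not part of the thread's proof) verifies that when the first column of [itex]P_{n}[/itex] is a unit eigenvector [itex]x[/itex] of [itex]A_{n}[/itex] with eigenvalue [itex]\lambda[/itex], the product [itex]P_{n}^{H}A_{n}P_{n}[/itex] has [itex]\lambda[/itex] in the top-left corner and zeros elsewhere in the first row and column:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (Z + Z.conj().T) / 2                     # Hermitian test matrix

lam_all, vecs = np.linalg.eigh(A)
lam, x = lam_all[0], vecs[:, 0]              # one eigenpair, ||x|| = 1

# Extend x to an orthonormal basis via QR (first column is x up to phase).
P, _ = np.linalg.qr(np.column_stack([x, rng.standard_normal((n, n - 1))]))

B = P.conj().T @ A @ P
# First column: P^H A x = lambda P^H x = lambda e_1 (columns orthonormal);
# first row is then zero because B is Hermitian.
assert np.allclose(B[0, 0], lam)             # top-left entry is lambda
assert np.allclose(B[1:, 0], 0)              # first column zero below lambda
assert np.allclose(B[0, 1:], 0)              # first row zero beyond lambda
```

The comments carry the reason: the first column of [itex]P_{n}^{H}A_{n}P_{n}[/itex] is [itex]P_{n}^{H}Ax = \lambda P_{n}^{H}x = \lambda e_{1}[/itex], and the zero first row follows from the matrix being Hermitian.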
 

1. What is a Hermitian matrix?

A Hermitian matrix, also known as a self-adjoint matrix, is a square matrix that is equal to its own conjugate transpose. In other words, the entry in row i, column j is the complex conjugate of the entry in row j, column i.

2. What are the properties of a Hermitian matrix?

One of the main properties of a Hermitian matrix is that its eigenvalues are always real numbers. Additionally, eigenvectors belonging to distinct eigenvalues are orthogonal to each other, and the matrix itself is diagonalizable. Its diagonal elements are always real.
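These properties can be demonstrated with a small NumPy sketch (illustrative, using a random Hermitian matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (Z + Z.conj().T) / 2                        # Hermitian by construction

w, V = np.linalg.eigh(H)                        # eigh expects Hermitian input
assert np.all(np.isreal(w))                     # eigenvalues are real
assert np.allclose(V.conj().T @ V, np.eye(3))   # eigenvectors orthonormal
assert np.allclose(V @ np.diag(w) @ V.conj().T, H)  # diagonalizable
```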

3. How is a Hermitian matrix different from a symmetric matrix?

A real symmetric matrix is a special case of a Hermitian matrix. A symmetric matrix satisfies [itex]a_{ij} = a_{ji}[/itex], while a Hermitian matrix satisfies [itex]a_{ij} = \overline{a_{ji}}[/itex]; for matrices with real entries the two conditions coincide, but a complex symmetric matrix need not be Hermitian.

4. What are some applications of Hermitian matrices?

Hermitian matrices have many applications in physics, particularly in quantum mechanics. They are also used in signal processing and image recognition algorithms, as well as in machine learning and data analysis.

5. How can I determine if a matrix is Hermitian?

A matrix is Hermitian if it is equal to its own conjugate transpose. This means that if you take the complex conjugate of each element and then transpose the matrix, the resulting matrix should be identical to the original matrix. (Having only real eigenvalues is a necessary condition, but it is not by itself sufficient to conclude that a matrix is Hermitian.)
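The check described above translates directly into code; the helper below (a hypothetical name, shown only for illustration) compares a matrix with its conjugate transpose:

```python
import numpy as np

def is_hermitian(M, tol=1e-10):
    # A matrix is Hermitian iff it equals its own conjugate transpose.
    M = np.asarray(M)
    return (M.ndim == 2 and M.shape[0] == M.shape[1]
            and np.allclose(M, M.conj().T, atol=tol))

assert is_hermitian(np.array([[2, 1 - 1j], [1 + 1j, 3]]))
assert not is_hermitian(np.array([[0, 1j], [1j, 0]]))
```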
