Is the product of two hermitian matrices always hermitian?

In summary, the conversation discusses an apparent contradiction: the product of two Hermitian matrices is Hermitian only if they commute, yet ##p^4## fails to be Hermitian on hydrogen states with ##l=0## even though ##p^2## is Hermitian. The discussion also touches on the implications for the hydrogen radial wave functions and their differentiability at the origin. It is concluded that ##p^2## is Hermitian only on a restricted domain: applied to an ##l=0## eigenfunction it produces a function outside that domain, and the resulting boundary term at ##r=0## makes the difference in the case of ##p^4##.
  • #71
Hans de Vries said:
The first order derivative term is real anti-symmetric.
Only in the Lebesgue inner product. But the transformation to spherical coordinates changes the inner product.

Thus you are working in the wrong inner product!
 
  • #72
Hans de Vries said:
There seems to be no reason that a combination of ##p^4## and ##r^2## would also make a valid self-adjoint operator as in the trivial cases.
All these operators are self-adjoint or more precisely essentially self-adjoint. The important lesson to be learned is

(a) Hermiticity is not sufficient for an operator to represent an observable; it must be an essentially self-adjoint operator
(b) In QT the operators describing observables with a continuous or partially continuous spectrum have a domain and co-domain that are smaller than the entire Hilbert space. For position and momentum you can use some Schwartz space of quickly falling functions, which is dense in the Hilbert space.

Here we have an example where some eigenfunctions of the Hamiltonian do not belong to the domain of the operators we are interested in. Though ##\hat{p}^2## is well defined when applied to these states, the result is no longer in the domain. That's why another application of ##\hat{p}^2## (giving ##\hat{p}^4## altogether) leads to trouble. Again, for a good treatment check the two nice pedagogical papers

https://arxiv.org/abs/quant-ph/9907069
https://arxiv.org/abs/quant-ph/0103153

To make the usual sloppy physicists' math rigorous (which of course implies taking the caveats seriously in situations like the one discussed here with ##\hat{p}^4##), the most elegant approach is the "rigged Hilbert space". You can find this in the following dissertation

http://galaxy.cs.lamar.edu/~rafaelm/webdis.pdf
and in the textbook

A. Galindo, P. Pascual, Quantum Mechanics, Springer Verlag, Heidelberg (1990), 2 Vols.
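To make the domain issue concrete, here is a small SymPy check for the hydrogen ground state (a sketch of my own, in atomic units ##\hbar=m=e=1##, not taken from the papers above): ##\hat{p}^2\psi_{1s}## is perfectly well defined but acquires a ##1/r## singularity, and a second naive application of the radial ##\hat{p}^2## then gives ##\langle\psi|\hat{p}^2(\hat{p}^2\psi)\rangle \neq \langle\hat{p}^2\psi|\hat{p}^2\psi\rangle##.

```python
# SymPy sketch (atomic units hbar = m = e = 1): for the hydrogen 1s radial
# function, <psi | p^2 (p^2 psi)> computed with the naive radial formula
# differs from <p^2 psi | p^2 psi>, because p^2 psi has left the domain.
import sympy as sp

r = sp.symbols('r', positive=True)
R1s = 2 * sp.exp(-r)                        # normalized l = 0 radial function, a_0 = 1

def p2(f):
    """Naive radial form of p^2 for l = 0: -(1/r^2) d/dr ( r^2 df/dr )."""
    return -sp.diff(r**2 * sp.diff(f, r), r) / r**2

p2R = sp.simplify(p2(R1s))                  # contains a 1/r term
p4R = sp.simplify(p2(p2R))                  # second naive application

lhs = sp.integrate(r**2 * R1s * p4R, (r, 0, sp.oo))   # <psi | p^2 (p^2 psi)>
rhs = sp.integrate(r**2 * p2R * p2R, (r, 0, sp.oo))   # <p^2 psi | p^2 psi>
print(p2R)              # 2*(2 - r)*exp(-r)/r, i.e. 2*(E_1 + 1/r)*R1s with E_1 = -1/2
print(lhs, rhs)         # -3 and 5
```

The naive value is even negative, which is impossible for a genuine ##\langle p^4\rangle##; the gap between the two numbers is exactly the kind of boundary contribution at ##r=0## discussed later in this thread.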
 
  • #73
Demystifier said:
You are misunderstanding. The problem is not the value of the integral over a function. The problem is the value of a function itself at ##r=0##, which appears as a boundary term after a partial integration. In Cartesian coordinates there is simply no boundary at ##r=0##, so in partial integration one does not need to worry about it.

Note that for any arbitrary real Anti-symmetric matrix ##\mathcal{A}## the following holds:
$$\langle\,\psi\,|\,\mathcal{A}\,\psi\,\rangle = 0 ~~~~~~~~~~
\langle\,\mathcal{A}\,\psi\,|\,\psi\,\rangle = 0 ~~~~~~~~~~
\langle\,\psi^*\,|\,\mathcal{A}\,\psi\,\rangle = 0 ~~~~~~~~~~
\langle\,\mathcal{A}\,\psi^*\,|\,\psi\,\rangle = 0 ~~~~~~~~~~$$
Therefore we need two independent wave functions for the self-adjointness test of an operator ##\mathcal{L}##, as in:
$$\langle\,\psi^*\,|\,\mathcal{L}\,\varphi\,\rangle~~=~~
\langle\,\mathcal{L}\,\psi^*\,|\,\varphi\,\rangle $$
An arbitrary real ##\mathcal{L=S\!+\!A}## with a symmetric part ##\mathcal{S}## and an anti-symmetric part ##\mathcal{A}## would pass the "test" below for being self-adjoint:
$$\langle\,\psi^*\,|\,\mathcal{L}\,\psi\,\rangle~~=~~
\langle\,\mathcal{L}\,\psi^*\,|\,\psi\,\rangle$$
This is because the anti-symmetric part is eliminated in the calculation.
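A quick numerical illustration of this point (a minimal NumPy sketch with an arbitrary real anti-symmetric matrix and real test vectors, not taken from any post in this thread): the single-vector "test" cannot see the anti-symmetric part, while two independent vectors expose it.

```python
# NumPy sketch: a real anti-symmetric matrix A is invisible to the
# single-vector test <psi, L psi> = <L psi, psi>, but is exposed once
# two independent real vectors psi, phi are used.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
S = M + M.T                      # real symmetric part
A = M - M.T                      # real anti-symmetric part
L = S + A                        # generic real matrix, L != L.T

psi = rng.normal(size=5)
phi = rng.normal(size=5)

# Single-vector "test": passes although L is not symmetric, because
# psi @ (A @ psi) = 0 for any real psi and anti-symmetric A.
print(np.isclose(psi @ (L @ psi), (L @ psi) @ psi))    # True

# Two-vector test: fails, revealing the anti-symmetric part.
print(np.isclose(psi @ (L @ phi), (L @ psi) @ phi))    # False (generically)
```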
 
  • #74
vanhees71 said:
Here we have an example, where some eigenfunctions of the Hamiltonian belong not to the domain of the operators we are interested in.

We cannot just use a single specific eigenfunction ##\psi##, for instance the radial part of an ##\ell=0## state, in the self-adjointness test, because we need two independent wave functions. See post #73.

The boundary term in your calculation in #62 for ##\hat{p}^2## does not cancel in the case of two independent wave functions, and one needs to rely on the ##r^2## factor to make it 0.
 
  • #75
A. Neumaier said:
Only in the Lebesgue inner product. But the transformation to spherical coordinates changes the inner product.

Thus you are working in the wrong inner product!
According to Arfken & Weber (10.6) the operator ##p^2## is not self-adjoint in a Cartesian inner product but it is self-adjoint in a spherical radial inner product.

See the ##r^2## factor in post #69.
 
  • #76
Hans de Vries said:
We cannot just use a single specific eigenfunction ##\psi##, for instance the radial part of an ##\ell=0## state, in the self-adjointness test, because we need two independent wave functions. See post #73.

The boundary term in your calculation in #62 for ##\hat{p}^2## does not cancel in the case of two independent wave functions, and one needs to rely on the ##r^2## factor to make it 0.
Sure, you can take any two ##\ell=0## wave functions. None of them belongs to the domain of ##\hat{p}^2## as an essentially self-adjoint operator. I think Griffiths has it right here.
 
  • #77
Hans de Vries said:
According to Arfken & Weber (10.6) the operator ##p^2## is not self-adjoint in a Cartesian inner product but it is self-adjoint in a spherical radial inner product.

See the ##r^2## factor in post #69.
Sure, that's what I've shown above.
 
  • #78
Demystifier said:
You are misunderstanding. The problem is not the value of the integral over a function. The problem is the value of a function itself at ##r=0##, which appears as a boundary term after a partial integration. In Cartesian coordinates there is simply no boundary at ##r=0##, so in partial integration one does not need to worry about it.
Sure, I understand what you were saying there, but I do not think it is enough to just wave your hands and say “at infinity everything will be fine” without deriving what the terms at infinity really would be. Anyway, I was pointing to the last equation in #7. Do you agree that, if we just calculate both these integrals, i.e. define the functions ##\Phi_n = p^4 \Psi_n## and ##\Phi_m = p^4 \Psi_m## and calculate

##\int \mathop{d^3 x} \Psi_n \Phi_m##

and

##\int \mathop{d^3 x} \Phi_n \Psi_m##

and take their difference, we get a definite answer (if it is 0, ##p^4## is Hermitian; otherwise it is not)?

Now we already know the answer if we calculate everything in spherical coordinates. So if we get a different answer in Cartesian coordinates, the value of these integrals depends on which coordinates we choose to calculate them (or at least, it is not valid to switch between Cartesian and spherical coordinates). As these are all reasonably well-behaved functions, this would raise the question of when spherical coordinates can be used at all (though I would love to learn about the subtleties).
 
  • #79
Hans de Vries said:
According to Arfken & Weber (10.6) the operator ##p^2## is not self-adjoint in a Cartesian inner product but it is self-adjoint in a spherical radial inner product.

What is a “Cartesian inner product”? The usual inner product should be

##\langle\Psi|\Phi\rangle = \int \mathop{d^3 x} \Psi^*\Phi##

and it should not matter what coordinates I choose in practice.
 
  • #80
Dr.AbeNikIanEdL said:
What is a “Cartesian inner product”? The usual inner product should be

##\langle\Psi|\Phi\rangle = \int \mathop{d^3 x} \Psi^*\Phi##

and it should not matter what coordinates I choose in practice.
If you use a nonlinear transformation of the coordinates as new coordinates, the integral inherits an additional Jacobian determinant, and hence looks different. Thus the choice of coordinates matters;
the general form of the inner product is ##\langle\Psi|\Phi\rangle = \int \mathop{d^3 x} w(x)\Psi(x)^*\Phi(x)## with a weight ##w(x)\ge0##, and this weight is different in different coordinate systems.
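As a concrete cross-check (a small SymPy sketch of my own, with a Gaussian as an arbitrary test function), the weight ##w=1## in Cartesian coordinates and the weight ##w=r^2\sin\theta## in spherical coordinates give the same number:

```python
# SymPy sketch: the same inner product computed with weight w = 1 in Cartesian
# coordinates and with weight w = r^2 sin(theta) in spherical coordinates.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r, th, ph = sp.symbols('r theta phi', positive=True)

Psi_cart = sp.exp(-(x**2 + y**2 + z**2) / 2)   # arbitrary (spherically symmetric) test function
Psi_sph = sp.exp(-r**2 / 2)                    # the same function written in spherical coordinates

cart = sp.integrate(Psi_cart**2,
                    (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo), (z, -sp.oo, sp.oo))
sph = sp.integrate(r**2 * sp.sin(th) * Psi_sph**2,
                   (r, 0, sp.oo), (th, 0, sp.pi), (ph, 0, 2 * sp.pi))

print(sp.simplify(cart - sph))   # 0: both equal pi**(3/2)
```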
 
  • #81
Yes, but ##w(x)## is different in different coordinate systems in a way that the integral value stays the same, right? I was assuming this is implicitly contained in the notation ##\mathop{d^3 x}##, i.e. in Cartesian coordinates

##\mathop{d^3 x} = \mathop{dx} \mathop{dy} \mathop{dz}##

whereas in spherical coordinates

##\mathop{d^3 x} = \mathop{d\phi} \mathop{d\cos\theta} r^2 \mathop{dr}##.

In that sense I don't understand how the inner product can be “Cartesian”.
 
  • #82
Dr.AbeNikIanEdL said:
Yes, but ##w(x)## is different in different coordinate systems in a way that the integral value stays the same, right? I was assuming this is implicitly contained in the notation ##\mathop{d^3 x}##, i.e. in Cartesian coordinates

##\mathop{d^3 x} = \mathop{dx} \mathop{dy} \mathop{dz}##

whereas in spherical coordinates

##\mathop{d^3 x} = \mathop{d\phi} \mathop{d\cos\theta} r^2 \mathop{dr}##.

In that sense I don't understand how the inner product can be “Cartesian”.
To a Cartesian coordinate system corresponds the weight ##w(x)=1##. Note that ##x## is just a dummy variable and can just as well be the Cartesian ##(x_1,x_2,x_3)## as the spherical ##(r,\phi,\theta)##; for the latter, ##w(x)=r^2\sin\theta##.
 
  • #83
A. Neumaier said:
Note that ##x## is just a dummy variable and can just as well be the Cartesian ##(x_1,x_2,x_3)## as the spherical ##(r,\phi,\theta)##; for the latter, ##w(x)=r^2\sin\theta##.

These are exactly the expressions I wrote above. My point is that

##\langle\Psi|\Phi\rangle = \int \mathop{dx}\mathop{dy}\mathop{dz} \Psi^*(x,y,z)\Phi(x,y,z) = \int \mathop{d\phi}\mathop{d\theta}\mathop{dr} r^2 \sin\theta \Psi^*(r,\theta,\phi)\Phi(r,\theta,\phi)##

so what sense does it make to call the inner product “Cartesian” or “spherical”? It is just “the integral over ##\mathbb{R}^3##”, no matter what coordinates I choose to perform that integral.
 
  • #84
Dr.AbeNikIanEdL said:
My point is that

##\langle\Psi|\Phi\rangle = \int \mathop{dx}\mathop{dy}\mathop{dz} \Psi^*(x,y,z)\Phi(x,y,z) = \int \mathop{d\phi}\mathop{d\theta}\mathop{dr} r^2 \sin\theta \Psi^*(r,\theta,\phi)\Phi(r,\theta,\phi)##

so what sense does it make to call the inner product “Cartesian” or “spherical”? It is just “the integral over ##\mathbb{R}^3##”, no matter what coordinates I choose to perform that integral.
It is the Lebesgue integral over ##\mathbb{R}^3##, but only if you say that ##x## denotes Cartesian coordinates.
 
  • #85
Ok, so this is supposed to be a Cartesian inner product over spherical coordinates, i.e.

##\langle\Psi|\Phi\rangle = \int \mathop{d\phi}\mathop{d\theta}\mathop{dr} \Psi^*(r,\theta,\phi)\Phi(r,\theta,\phi)##

that then is somehow only defined on the subspace where the integral converges? What am I learning from this other than that it is a strange definition of the inner product?
 
  • #86
Dr.AbeNikIanEdL said:
Ok, so this is supposed to be a Cartesian inner product over spherical coordinates, i.e.

##\langle\Psi|\Phi\rangle = \int \mathop{d\phi}\mathop{d\theta}\mathop{dr} \Psi^*(r,\theta,\phi)\Phi(r,\theta,\phi)##

that then is somehow only defined on the subspace where the integral converges? What am I learning from this other than that it is a strange definition of the inner product?
This is a valid inner product defining a Hilbert space in which ##i\,d/dr## is self-adjoint. It is equivalent to the physical inner product when one rescales the wave function by the square root of the weight obtained from the substitution rule.
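A trivial symbolic illustration of that rescaling (a sketch of my own; for the radial problem the square root of the weight ##r^2## is just ##r##, so one sets ##u = rR##):

```python
# SymPy sketch: the substitution u(r) = r * R(r) absorbs the weight r^2,
# turning the weighted inner product into a flat one and the radial
# operator (d/dr + 1/r) into a plain d/dr acting on u.
import sympy as sp

r = sp.symbols('r', positive=True)
R1 = sp.Function('R1')(r)
R2 = sp.Function('R2')(r)
u1, u2 = r * R1, r * R2

# weighted integrand  r^2 R1 R2  equals the flat integrand  u1 u2
print(sp.simplify(r**2 * R1 * R2 - u1 * u2))                        # 0

# r * (d/dr + 1/r) R  equals  d/dr (r R): p_r acts as a plain derivative on u
print(sp.simplify(r * (sp.diff(R2, r) + R2 / r) - sp.diff(u2, r)))  # 0
```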
 
  • #87
Dr.AbeNikIanEdL said:
Anyway, I was pointing to the last equation in #7. Do you agree that, if we just calculate both these integrals, i.e. define the functions ##\Phi_n = p^4 \Psi_n## and ##\Phi_m = p^4 \Psi_m## and calculate

##\int \mathop{d^3 x} \Psi_n \Phi_m##

and

##\int \mathop{d^3 x} \Phi_n \Psi_m##

and take their difference, we get a definite answer (if it is 0, ##p^4## is Hermitian; otherwise it is not)?
I agree.

Dr.AbeNikIanEdL said:
Now we already know the answer if we calculate everything in spherical coordinates. So if we get a different answer in Cartesian coordinates, the value of these integrals depends on which coordinates we choose to calculate them (or at least, it is not valid to switch between Cartesian and spherical coordinates). As these are all reasonably well-behaved functions, this would raise the question of when spherical coordinates can be used at all (though I would love to learn about the subtleties).
I think I should do a careful calculation by myself; before that I cannot say anything definite.
 
  • #88
fresh_42 said:
What would be ##P## to get ##P^2= -\dfrac{\hbar^2}{r^2}\dfrac{d}{dr}\left( r^2\dfrac{d}{dr} \right)##?

Actually ##P## (or better ##\hat{p}_r##) here is given by the radial part of the Spherical Polar form of the Dirac equation. The following is from Paul Strange's book

[Scanned excerpt from Paul Strange's book containing Eq. (8.9).]
Look at (8.9) and concentrate on the essential radial part of ##\hat{p}##:

$$\hat{p}_r ~=~i\tilde{\gamma}_5\tilde{\sigma}_r\left(\hbar\dfrac{\partial}{\partial r}+\dfrac{\hbar}{r}\right)$$

If we square this radial part then we get ##\hat{p}_r^2##. The matrix prefactor ##i\tilde{\gamma}_5\tilde{\sigma}_r## squares to ##-1##, and working out the differential part gives

$$\hat{p}_r^2 ~=~\left[~i\tilde{\gamma}_5\tilde{\sigma}_r\left(\hbar\dfrac{\partial}{\partial r}+\dfrac{\hbar}{r} \right)~\right]^2 ~=~ -\hbar^2\left(\dfrac{\partial^2}{\partial r^2}+\dfrac{2}{r}\dfrac{\partial}{\partial r} \right)~=~ -\dfrac{\hbar^2}{r^2}\dfrac{\partial}{\partial r}\left( r^2\dfrac{\partial}{\partial r} \right)$$
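A quick symbolic check of this squaring step (a SymPy sketch of my own, keeping only the scalar differential part and dropping the matrix factors):

```python
# SymPy sketch: applying (d/dr + 1/r) twice to a generic f(r) reproduces
# (1/r^2) d/dr ( r^2 df/dr ), i.e. the scalar part of p_r^2 up to -hbar^2.
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)

def D(g):
    """The scalar operator d/dr + 1/r."""
    return sp.diff(g, r) + g / r

lhs = D(D(f))                                      # (d/dr + 1/r)^2 f
rhs = sp.diff(r**2 * sp.diff(f, r), r) / r**2      # (1/r^2) d/dr (r^2 f')
print(sp.simplify(lhs - rhs))                      # 0
```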

The definition of ##\tilde{K}## which contains the angular parts of ##\hat{p}## using the angular momentum operators is:

[Scanned excerpt from Paul Strange's book with the definition of ##\tilde{K}##.]
 
  • #89
fresh_42 said:
What would be ##P## to get ##P^2= -\dfrac{\hbar^2}{r^2}\dfrac{d}{dr}\left( r^2\dfrac{d}{dr} \right)##?

So, with the (corrected) version of the post above, we may write more generally for ##P=\hat{p}_r##:

##\hat{p}_r~~=~~ i\hbar\left(\dfrac{\partial}{\partial r} +\dfrac{1}{r}\right)##

For an arbitrary power ##\hat{p}_r^n## we can write

##\hat{p}^n_r~~=~~ (i\hbar)^n\left(\dfrac{\partial}{\partial r} +\dfrac{n}{r}\right)\left(\dfrac{\partial}{\partial r}\right)^{n-1}##

or alternatively:

##\hat{p}^n_r~~=~~ (i\hbar)^n\left(\dfrac{1}{r^n}\dfrac{\partial}{\partial r}r^n\right)\left(\dfrac{\partial}{\partial r}\right)^{n-1}##
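A symbolic spot check of these two closed forms against ##n## repeated applications of the scalar operator (a SymPy sketch of my own; the matrix factors in ##\hat{p}_r## are again not tracked):

```python
# SymPy sketch: n applications of (d/dr + 1/r) agree with both closed forms
# (d/dr + n/r) d^{n-1}/dr^{n-1}  and  (1/r^n) d/dr r^n d^{n-1}/dr^{n-1}.
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)

def D(g):
    """The scalar part of p_r / (i*hbar): d/dr + 1/r."""
    return sp.diff(g, r) + g / r

for n in range(2, 6):
    repeated = f
    for _ in range(n):
        repeated = D(repeated)
    dn1 = sp.diff(f, (r, n - 1))                       # (n-1)-th derivative of f
    form1 = sp.diff(dn1, r) + n * dn1 / r              # (d/dr + n/r) f^(n-1)
    form2 = sp.diff(r**n * dn1, r) / r**n              # (1/r^n) d/dr ( r^n f^(n-1) )
    print(n, sp.simplify(repeated - form1), sp.simplify(repeated - form2))   # n, 0, 0
```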
 
  • #90
Dr.AbeNikIanEdL said:
Now we already know the answer if we calculate everything in spherical coordinates. So if we get a different answer in Cartesian coordinates, the value of these integrals depends on which coordinates we choose to calculate them (or at least, it is not valid to switch between Cartesian and spherical coordinates). As these are all reasonably well-behaved functions, this would raise the question of when spherical coordinates can be used at all (though I would love to learn about the subtleties).
Ah, now I found the error in my argument. When one does partial integration in Cartesian coordinates, one encounters sub-integrals of the form
$$\int_{-\infty}^{\infty}dx \, \partial_x F(x,y,z)=F(\infty,y,z)-F(-\infty,y,z)$$
Naively I thought that such terms vanish because ##F## falls off exponentially for ##x\rightarrow\pm\infty##. But that is not necessarily true for ##y,z\rightarrow 0##, because ##F(x,y,z)## diverges in that limit. So now I agree with the earlier statements by @vanhees71 and @A. Neumaier that the source of the problem is the divergence of the potential at ##r=0##. If the potential were regularized at small ##r##, the Hamiltonian eigenfunctions would have well-defined derivatives at ##r=0## and the problem would disappear.
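A small SymPy check of my own (atomic units) makes this divergence explicit for the 1s state: already a single application of the radial ##\hat{p}^2## produces a ##1/r## term, in line with ##\hat{p}^2\psi = 2(E+1/r)\psi## and the singular Coulomb potential.

```python
# SymPy sketch (atomic units): p^2 applied to the hydrogen 1s radial function
# equals 2*(E_1 + 1/r)*R_1s with E_1 = -1/2, which blows up as r -> 0.
import sympy as sp

r = sp.symbols('r', positive=True)
R1s = 2 * sp.exp(-r)
E1 = sp.Rational(-1, 2)

p2R = -sp.diff(r**2 * sp.diff(R1s, r), r) / r**2    # naive radial p^2 for l = 0
print(sp.simplify(p2R - 2 * (E1 + 1 / r) * R1s))    # 0: exact identity for r > 0
print(sp.limit(p2R, r, 0))                          # oo: the 1/r divergence at the origin
```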

And by the way, we already had a thread with the same question: https://www.physicsforums.com/threa...n-for-hydrogen-like-l-0-wavefunctions.563295/
 
  • #91
Hans de Vries said:
So, with the (corrected) version of the post above, we may write more generally for ##P=\hat{p}_r##:

##\hat{p}_r~~=~~ i\hbar\left(\dfrac{\partial}{\partial r} +\dfrac{1}{r}\right)##

For an arbitrary power ##\hat{p}_r^n## we can write

##\hat{p}^n_r~~=~~ (i\hbar)^n\left(\dfrac{\partial}{\partial r} +\dfrac{n}{r}\right)\left(\dfrac{\partial}{\partial r}\right)^{n-1}##

or alternatively:

##\hat{p}^n_r~~=~~ (i\hbar)^n\left(\dfrac{1}{r^n}\dfrac{\partial}{\partial r}r^n\right)\left(\dfrac{\partial}{\partial r}\right)^{n-1}##
But these powers have different domains, which causes the problems discussed in the present thread.
 
  • #94
A. Neumaier said:
It is the Lebesgue integral over ##\mathbb{R}^3##, but only if you say that ##x## denotes Cartesian coordinates.
The integral is over Euclidean ##\mathbb{R}^3##, and as such the volume element is independent of the choice of coordinates,
$$\mathrm{d}^3 x = \epsilon_{ijk} \frac{\partial x_1}{\partial q_i} \frac{\partial x_2}{\partial q_j} \frac{\partial x_3}{\partial q_k} \, \mathrm{d}^3 q = \det\!\left(\frac{\partial x_a}{\partial q_i}\right) \mathrm{d}^3 q.$$
Of course, it is this specific integration measure that has to be used in the Hilbert space, because we are dealing with a representation/realization of the Galilei group, where space is Euclidean.
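For the Cartesian-to-spherical case this works out as expected (a short SymPy check of my own): the Jacobian determinant reproduces the familiar ##r^2\sin\theta##.

```python
# SymPy sketch: the Jacobian determinant of (x, y, z) with respect to
# (r, theta, phi) is r^2 sin(theta), i.e. d^3x = r^2 sin(theta) dr dtheta dphi.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
X = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

J = X.jacobian([r, th, ph])
print(sp.simplify(J.det()))   # r**2*sin(theta)
```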
 

1. What does it mean for a matrix to be hermitian?

A Hermitian matrix is a square complex matrix that is equal to its own conjugate transpose. This means that the entries on the main diagonal are real and that the ##(i,j)## entry is the complex conjugate of the ##(j,i)## entry.

2. Can a non-square matrix be hermitian?

No, a matrix must be square in order to be Hermitian. The conjugate transpose of an ##m\times n## matrix is ##n\times m##, so a non-square matrix can never equal its own conjugate transpose.

3. Is the product of two hermitian matrices always hermitian?

No. If ##A## and ##B## are Hermitian, then ##(AB)^\dagger = B^\dagger A^\dagger = BA##, so the product ##AB## is Hermitian if and only if ##A## and ##B## commute. (This is why the ##p^4 = p^2 p^2## case discussed above is surprising: ##p^2## trivially commutes with itself, and the subtlety there is one of operator domains, not of commutativity.)
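A quick numerical counterexample (a minimal NumPy sketch with arbitrarily chosen matrices):

```python
# NumPy sketch: two Hermitian matrices whose product is not Hermitian,
# and a (trivially) commuting pair whose product is.
import numpy as np

A = np.array([[1, 1j], [-1j, 2]])         # Hermitian
B = np.array([[0, 2 + 1j], [2 - 1j, 3]])  # Hermitian, does not commute with A
print(np.allclose(A, A.conj().T), np.allclose(B, B.conj().T))   # True True
print(np.allclose(A @ B, (A @ B).conj().T))                     # False

C = A @ A                                 # A commutes with itself
print(np.allclose(C, C.conj().T))         # True
```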

4. What are some real-world applications of hermitian matrices?

Hermitian matrices are commonly used in physics, particularly in quantum mechanics, to represent observables and operators. They are also used in signal processing and control theory.

5. Are all eigenvalues of a hermitian matrix real numbers?

Yes, all eigenvalues of a Hermitian matrix are real numbers. If ##Av = \lambda v## with ##v \neq 0##, then ##\lambda \langle v,v\rangle = \langle v, Av\rangle = \langle Av, v\rangle = \bar{\lambda}\langle v,v\rangle##, hence ##\lambda = \bar{\lambda}##.
