# Hermitian Operators?

I am a QM beginner so go easy on me. I have just noticed something. Let $$\hat{O}$$ be an hermitian operator. Then $$\left( \hat { O } \right) ^{ \dagger }\neq \hat { O }$$ when it is by itself. For example $$\left( \hat { p } \right) ^{ \dagger }=i\hbar \frac { \partial }{ \partial x } \neq \hat { p } =-i\hbar \frac { \partial }{ \partial x }$$.

In the context of an eigenfunction: $$\left( \hat { O } \Psi \right) ^{ \dagger }=\Psi ^{ \ast }\hat { O } ^{ \dagger }\neq \Psi ^{ \ast }\hat { O }$$. However, I guess it still doesn't work unless I give it a $$\Psi$$ to chew on the right. I guess this might finally work: $$\left( \hat { O } \Psi \right) ^{ \dagger }\Psi =\Psi ^{ \ast }\hat { O } ^{ \dagger }\Psi =\Psi ^{ \ast }\hat { O } \Psi$$.

However, if I test this with an actual example: $$\hat { O } =\hat { p } \quad \text{and}\quad \Psi ={ e }^{ ikx },\quad \text{then}\quad { e }^{ -ikx }(-i\hbar \frac { \partial }{ \partial x } ){ e }^{ ikx }\neq { e }^{ -ikx }(i\hbar \frac { \partial }{ \partial x } ){ e }^{ ikx }.$$ So that does not work either.

Could someone help clarify what is going on here? In what sense are these operators Hermitian? I guess not in the same sense that a matrix is Hermitian.

Thanks,
Chris Maness

Bill_K
Let $$\hat{O}$$ be an hermitian operator. Then $$\left( \hat { O } \right) ^{ \dagger }\neq \hat { O }$$ when it is by itself. For example $$\left( \hat { p } \right) ^{ \dagger }=i\hbar \frac { \partial }{ \partial x } \neq \hat { p } =-i\hbar \frac { \partial }{ \partial x }$$.
The Hermitian conjugate is the complex conjugate transpose. You've taken the complex conjugate, but not the transpose. In this case, transpose means that the partial derivative acts to the left.

Fredrik
Staff Emeritus
Gold Member
I am a QM beginner so go easy on me. I have just noticed something. Let $$\hat{O}$$ be an hermitian operator. Then $$\left( \hat { O } \right) ^{ \dagger }\neq \hat { O }$$ when it is by itself. For example $$\left( \hat { p } \right) ^{ \dagger }=i\hbar \frac { \partial }{ \partial x } \neq \hat { p } =-i\hbar \frac { \partial }{ \partial x }$$.
This should be
$$\hat p^\dagger =\left(-i\hbar\frac{\partial}{\partial x}\right)^\dagger =i\hbar\underbrace{\left(\frac{\partial}{\partial x}\right)^\dagger}_{\displaystyle=-\frac{\partial}{\partial x}} =\hat p.$$
In context with an eigenfunction $$\left( \hat { O } \Psi \right) ^{ \dagger }=\Psi ^{ \ast }\hat { O } ^{ \dagger }\neq \Psi ^{ \ast }\hat { O }$$.
The function should always be to the right of the operator...and the dagger acts only on the operator, not on the function ##\hat O\psi##.

Bill_K
This should be
$$=\underbrace{\left(\frac{\partial}{\partial x}\right)^\dagger}_{\displaystyle=-\frac{\partial}{\partial x}}$$
... or equivalently, integrating by parts, ∂/∂x acting to the left.

Fredrik
... or equivalently, integrating by parts, ∂/∂x acting to the left.
When we're not doing bra-ket notation, it seems very strange to think of any operator as "acting to the left". But yes, integration by parts is what we use in the standard non-rigorous argument for ##D^\dagger=-D## (where D is the operator that takes a differentiable function to its derivative).
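Spelling out that integration-by-parts argument, under the usual non-rigorous assumption that f and g vanish at ##\pm\infty## (so the boundary term drops):
$$\langle Df,g\rangle =\int_{-\infty}^{\infty}(f'(x))^*g(x)\,\mathrm dx =\Big[f^*(x)g(x)\Big]_{-\infty}^{\infty}-\int_{-\infty}^{\infty}f^*(x)g'(x)\,\mathrm dx =\langle f,-Dg\rangle,$$
which gives ##D^\dagger=-D##, and hence ##\hat p^\dagger=(-i\hbar D)^\dagger=(i\hbar)(-D)=\hat p##.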

I see that your expression satisfies my little thought experiment. However, I am having a hard time understanding this equation in terms of your insight: $$\frac{\partial \Phi^*}{\partial t} = -\frac{1}{i\hbar}\Phi^*H^* = -\frac{1}{i\hbar}\Phi^*H.$$ This is from the "Schrödinger picture" discussion in the Wikipedia article on the Ehrenfest theorem. Is it wrong? I have used it to work out several problems correctly, but now, considering this new information and my failed thought experiments (proofs, I guess), my head is reeling.

Thanks,
Chris Maness

Fredrik
The issue here is whether it's OK to write ##(Af)^* =f^*A^\dagger##. I'm inclined to say no, but I guess it depends on your tolerance for nonsensical mathematical expressions that can be made sense of by assigning a specific meaning to the notation. As you can tell by my choice of words, my tolerance is rather low. The only way I see to make sense of this equality is to interpret it as an abbreviated version of the statement
$$\int (Af)^*(x)g(x)\mathrm dx =\int f^*(x)(A^\dagger g)(x)\mathrm dx,~~ \text{for all g}.$$ You can see that this holds by noting that the left-hand side is equal to ##\langle Af,g\rangle##, and the right-hand side is equal to ##\langle f,A^\dagger g\rangle##. By definition of the ##\dagger## operation, these two numbers are equal for all g.

The issue here is whether it's OK to write ##(Af)^* =f^*A^\dagger##
I am seeing now how that is a bit funky. For example, suppose I kept the identities going, pretending these behave analogously to Hermitian matrices (which is probably why I am having a hard time with this in the first place).

If that were the case, it would continue like so: ##(Af)^* =f^*A^\dagger=f^*A## if A is Hermitian. Argh!

$$\int (Af)^*(x)g(x)\mathrm dx =\int f^*(x)(A^\dagger g)(x)\mathrm dx,~~ \text{for all g}.$$
I have seen this one around searching on this topic, but I get NO intuitive warm and fuzzy satisfaction from it at all. If I used it, it would just be by the force of definition -- which does not build my confidence in reaching for it.

Thanks,
Chris Maness

stevendaryl
Staff Emeritus
I have seen this one around searching on this topic, but I get NO intuitive warm and fuzzy satisfaction from it at all. If I used it, it would just be by the force of definition -- which does not build my confidence in reaching for it.

Thanks,
Chris Maness
It's a little more intuitive if you see how it relates to the analogous fact about matrices. Are you familiar with matrices? If $f$ and $g$ are column matrices, and $A$ is a square matrix (and the dimensions are appropriate) then you can define a product of the three of them as follows:

$f^\dagger (A g)$

where $f^\dagger$ is the complex conjugate of the transpose of $f$.

It's pretty simple to prove from the properties of matrix multiplication and transposition, that

$f^\dagger (A^\dagger g) = (A f)^\dagger g$

In terms of components, this says:
$\sum_i (f^*_i (A^\dagger g)_i) = \sum_i (A f)^*_i g_i$

Hilbert spaces of functions can be thought of intuitively as a generalization of matrices to the case where the "indices" are continuous. So instead of $\sum_i$ you have $\int dx$. So the identity becomes:

$\int f^*(x) (A^\dagger g)(x) dx = \int (A f)^*(x) g(x) dx$
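As a quick numerical sanity check (a NumPy sketch of my own, not part of the argument above), the matrix identity holds for random complex data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random complex column vectors f, g and a random complex square matrix A.
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

A_dag = A.conj().T  # Hermitian conjugate = complex conjugate transpose

lhs = f.conj() @ (A_dag @ g)  # f-dagger (A-dagger g)
rhs = (A @ f).conj() @ g      # (A f)-dagger g
assert np.allclose(lhs, rhs)
```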

The only remaining question is: why asterisks instead of daggers in Hilbert space?

$$\int f^*(x) (A^\dagger g)(x) dx = \int (A f)^*(x) g(x) dx$$

Chris

bhobba
Mentor
Hilbert spaces of functions can be thought of intuitively as a generalization of matrices to the case where the "indices" are continuous. So instead of $\sum_i$ you have $\int dx$. So the identity becomes:

$\int f^*(x) (A^\dagger g)(x) dx = \int (A f)^*(x) g(x) dx$
It can almost certainly be made rigorous by means of the Rigged Hilbert space formalism.

But for the details one must consult specialist tomes. I did at one time; won't do that again.

Without going that deep into it, Hall's book probably resolves it:
https://www.amazon.com/dp/146147115X/?tag=pfamazon01-20&tag=pfamazon01-20

Thanks
Bill

stevendaryl
The only remaining question is: why asterisks instead of daggers in Hilbert space?
$$\int f^*(x) (A^\dagger g)(x) dx = \int (A f)^*(x) g(x) dx$$
Chris
I'm not sure what your question means. Both are used in working with Hilbert space.

The basic inner product in Hilbert space is:

$\int f^*(x) g(x) dx$

This is analogous to the inner product for column matrices:

$f^\dagger g$

To see that they are exactly analogous, write the latter out in terms of components:

$f^\dagger g = \sum_i f^*_i g_i$

On the right side, there are no daggers, because the matrix components $f_i$ are just complex numbers.

Similarly, for a particular value of $x$, $f(x)$ is just a complex number. You can take its complex conjugate, $f(x)^*$, but using a dagger doesn't make any difference: dagger applied to a number is the same as asterisk.

The distinction, that people are not used to making, is between a function, $f$, which is an infinite object, and the value $f(x)$ of a function at a particular value of $x$. This is like the distinction between a column matrix $f$ and one of its components, $f_i$.

Fredrik
The only remaining question is: why asterisks instead of daggers in Hilbert space?

$$\int f^*(x) (A^\dagger g)(x) dx = \int (A f)^*(x) g(x) dx$$

Chris
The dagger is the adjoint operation. It takes operators to operators. The asterisk is the complex conjugate of a function. The complex conjugate of a function ##f:\mathbb R\to\mathbb C## is the function ##f^*:\mathbb R\to\mathbb C## defined by ##f^*(x)=(f(x))^*## for all x in the domain of f. In the formula quoted above, ##\dagger## acts on the operator ##A##, and ##*## acts on the functions ##f## and ##Af##.

The formula holds because
$$\int f^*(x) (A^\dagger g)(x) dx =\langle f,A^\dagger g\rangle =\langle (A^\dagger)^\dagger f,g\rangle =\langle Af,g\rangle = \int (A f)^*(x) g(x)\,dx.$$

stevendaryl
The dagger is the adjoint operation. It takes operators to operators. The asterisk is the complex conjugate of a function. The complex conjugate of a function ##f:\mathbb R\to\mathbb C## is the function ##f^*:\mathbb R\to\mathbb C## defined by ##f^*(x)=(f(x))^*## for all x in the domain of f. In the formula quoted above, ##\dagger## acts on the operator ##A##, and ##*## acts on the functions ##f## and ##Af##.

The formula holds because
$$\int f^*(x) (A^\dagger g)(x) dx =\langle f,A^\dagger g\rangle =\langle (A^\dagger)^\dagger f,g\rangle =\langle Af,g\rangle = \int (A f)^*(x) g(x)\,dx.$$
This might be a nit-picky mathematical point that I should keep my mouth shut about, but there certainly can be an adjoint of a function. If $f$ is a function, then $f^\dagger$ is a functional--a function of functions that takes a function and returns a scalar. Mathematically, the action of the functional $f^\dagger$ on a function $g$ is defined by:

$f^\dagger(g) = \int f^*(x) g(x) dx$

So the $^\dagger$ operation acts on functions to return functionals, while the $^*$ acts on complex numbers to return complex numbers.

Fredrik
In bra-ket terminology, you're defining the adjoint of a ket as the corresponding bra, i.e. ##f^\dagger =\langle f,\cdot\rangle##. I have seen that before (here at PF), but I have never found it useful.

stevendaryl
This might be a nit-picky mathematical point that I should keep my mouth shut about, but there certainly can be an adjoint of a function. If $f$ is a function, then $f^\dagger$ is a functional--a function of functions that takes a function and returns a scalar. Mathematically, the action of the functional $f^\dagger$ on a function $g$ is defined by:

$f^\dagger(g) = \int f^*(x) g(x) dx$

So the $^\dagger$ operation acts on functions to return functionals, while the $^*$ acts on complex numbers to return complex numbers.
So the equation

$(A f)^\dagger = f^\dagger A^\dagger$

actually is not nonsense. It means that that as a functional, $(A f)^\dagger$ is the same as $f^\dagger A^\dagger$. Functionals are equal if they give the same value on every argument:

$(A f)^\dagger(g) = f^\dagger(A^\dagger(g))$
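To make the functional viewpoint concrete, here is a toy NumPy sketch (my own illustration): functions become arrays of grid samples, and ##f^\dagger## becomes a Python function that eats another function and returns a number.

```python
import numpy as np

# Model functions on [0, 1) as arrays of grid samples.
n = 1000
x, dx = np.linspace(0.0, 1.0, n, endpoint=False, retstep=True)

def dagger(f):
    # f-dagger is a functional: it maps a function g to the number
    # int f*(x) g(x) dx, here approximated by a Riemann sum.
    return lambda g: np.sum(f.conj() * g) * dx

f = np.exp(2j * np.pi * x)   # f(x) = e^{2 pi i x}
g = np.sin(2.0 * np.pi * x)  # g(x) = sin(2 pi x)

val = dagger(f)(g)           # a plain complex number, not a function
assert isinstance(val, complex)
# int_0^1 e^{-2 pi i x} sin(2 pi x) dx = -i/2
assert np.isclose(val, -0.5j, atol=1e-3)
```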

stevendaryl
In bra-ket terminology, you're defining the adjoint of a ket as the corresponding bra, i.e. ##f^\dagger =\langle f,\cdot\rangle##. I have seen that before (here at PF), but I have never found it useful.
Well, it does make identities such as the one the original poster was asking about almost trivial.

So the equation

$(A f)^\dagger = f^\dagger A^\dagger$
So for a matrix this is obvious to me. Let H be any Hermitian matrix: $$(HM)^{\dagger}=M^{\dagger}H^{\dagger}=M^{\dagger}H.$$

So in my mind, if the quoted equation is a true analog of matrix operation, the identity should continue to:

$(A f)^\dagger = f^\dagger A^\dagger=f^\dagger A$

Does it?

Thanks,
Chris

stevendaryl
So for a matrix this is obvious to me. Let H be any Hermitian matrix: $$(HM)^{\dagger}=M^{\dagger}H^{\dagger}=M^{\dagger}H.$$

So in my mind, if the quoted equation is a true analog of matrix operation, the identity should continue to:

$(A f)^\dagger = f^\dagger A^\dagger=f^\dagger A$

Does it?

Thanks,
Chris
Yes, in the sense that the far right side and the far left side produce the same result when applied to an argument (a square-integrable function) $g$:

$(A f)^\dagger g = (f^\dagger A) g = f^\dagger (A g)$

In terms of integrals, this is a compact way of writing:

$\int (A f)^*(x) g(x) dx = \int f^*(x) (A g)(x) dx$
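The matrix version of this chain can be sanity-checked numerically too (a NumPy sketch of my own; ##A=B+B^\dagger## is Hermitian by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.conj().T  # Hermitian by construction: A-dagger = A
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

assert np.allclose(A, A.conj().T)       # A is Hermitian
assert np.allclose((A @ f).conj() @ g,  # (A f)-dagger g
                   f.conj() @ (A @ g))  # f-dagger (A g)
```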

So it works in the sense that it really needs to be in the context of the integral because the integral is to the function what the summation is to the matrix element. Correct?

Chris

stevendaryl
So it works in the sense that it really needs to be in the context of the integral because the integral is to the function what the summation is to the matrix element. Correct?

Chris
Definitely. The understanding of functions as "vectors" in the Hilbert space, and $\dagger$ as the adjoint requires that you understand $f^\dagger g$ as an integral:

$f^\dagger g = \int f^*(x) g(x) dx$

[EDIT]
I keep bringing up the analogy with matrices. In matrices, the meaning of matrix multiplication is summation:
$f^\dagger g = \sum_i f^*_i g_i$. Integration is sort of the generalization of this to a continuous index, $x$, instead of a discrete index, $i$.

Ok, thank you. $$mind \Rightarrow blown$$

Chris

Here is a handy little identity that I have used to solve a homework problem. I saw it on several authoritative sites on self-adjoint operators:

$$\left< { \hat{O}f }|{g } \right> =\left< { f }|{ \hat{O}g } \right>$$ if ##\hat{O}## is self-adjoint.

However, I tried this little mathematical experiment to see if it works:

Let $$\Psi=e^{ikx}$$, and $$\left< { \hat { p } \Psi }|{ \Psi } \right> =\left< { \Psi }|{ \hat { p } \Psi } \right>$$

The right side equals:

$$\left< { -i\hbar \frac { \partial }{ \partial x } \Psi }|{ \Psi } \right> =\left< { -i\hbar (-i)\Psi }|{ \Psi } \right> =-\hbar \left< { \Psi }|{ \Psi } \right> =-\hbar$$

The left side equals:

$$\left< { \Psi }|{ -i\hbar \frac { \partial }{ \partial x } \Psi } \right> =-i\hbar \left< { \Psi }|{ i\Psi } \right> =\hbar \left< { \Psi }|{ \Psi } \right> =\hbar$$

It no worky: the L.S. and the R.S. are not identical. Where did I go wrong here?

Thanks,
Chris

Fredrik
However, I tried this little mathematical experiment to see if it works:

Let $$\Psi=e^{ikx}$$, and $$\left< { \hat { p } \Psi }|{ \Psi } \right> =\left< { \Psi }|{ \hat { p } \Psi } \right>$$

The right side equals:

$$\left< { -i\hbar \frac { \partial }{ \partial x } \Psi }|{ \Psi } \right> =\left< { -i\hbar (-i)\Psi }|{ \Psi } \right> =-\hbar \left< { \Psi }|{ \Psi } \right> =-\hbar$$
What you wrote before the first equality looks like what you had on the left, not on the right. I don't know what you're doing next, but the derivative isn't going to disappear. (Edit: Ahh...after seeing strangerep's reply, I noticed that you assumed that ##\Psi=e^{ikx}##. OK, in that case d/dx is going to pull out a factor of ik from ##\Psi##).

The left side equals:
Are you sure you're not confusing left with right? Remember, on your left hand, the thumb is to the right.

To verify that ##\hat p## is self-adjoint, you should use the definition of this particular inner product to prove that ##\langle\hat p f,f\rangle=\langle f,\hat pf\rangle## for all square-integrable functions f. Actually, it doesn't have to be the same function on both sides. You can prove that ##\langle\hat p f,g\rangle=\langle f,\hat pg\rangle## for all square-integrable f,g.

Hint: integration by parts, or equivalently, use the product rule in the form ##f'g=(fg)'-fg'##.

(You can of course continue to denote the function by ##\Psi## if you want. I'm using f mainly because it's easier to type).
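Spelled out with the boundary term explicit (the standard non-rigorous version, assuming f and g vanish at ##\pm\infty##):
$$\langle\hat p f,g\rangle =\int_{-\infty}^{\infty}\left(-i\hbar f'(x)\right)^*g(x)\,\mathrm dx =i\hbar\Big[f^*(x)g(x)\Big]_{-\infty}^{\infty}+\int_{-\infty}^{\infty}f^*(x)\left(-i\hbar g'(x)\right)\mathrm dx =\langle f,\hat p g\rangle.$$
The boundary term is exactly what refuses to vanish for a plane wave like ##e^{ikx}##, which isn't square-integrable in the first place.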

Hello,

The momentum operator needs to be treated quite carefully (as does everything, ha ha). Its self-adjointness and eigenvalue spectrum depend very strongly on the boundary conditions of the states. I would suggest writing out those inner products in integral form and very carefully going through the computation. You will see when you have to make some assumption on the boundary condition.
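To make that concrete, here is a small NumPy sketch (my own illustration, with ##\hbar=1##): for states that vanish at the endpoints the two inner products agree, while plane waves on a finite interval miss by exactly a boundary term.

```python
import numpy as np

# Sketch (hbar = 1, p = -i d/dx): compare <p f, g> and <f, p g> on [0, 1],
# with integrals approximated by Riemann sums and derivatives taken analytically.
n = 100000
x, dx = np.linspace(0.0, 1.0, n, endpoint=False, retstep=True)

def inner(f, g):
    # <f, g> = int f*(x) g(x) dx
    return np.sum(f.conj() * g) * dx

# Case 1: f and g vanish at both endpoints, so the boundary term dies.
f, df = np.sin(np.pi * x), np.pi * np.cos(np.pi * x)
g, dg = np.sin(2.0 * np.pi * x), 2.0 * np.pi * np.cos(2.0 * np.pi * x)
assert np.isclose(inner(-1j * df, g), inner(f, -1j * dg), atol=1e-3)

# Case 2: plane waves ignore the boundary, and the two sides differ by
# exactly the boundary term i [f*(x) g(x)] evaluated from 0 to 1.
k1, k2 = 1.0, 2.0
f, df = np.exp(1j * k1 * x), 1j * k1 * np.exp(1j * k1 * x)
g, dg = np.exp(1j * k2 * x), 1j * k2 * np.exp(1j * k2 * x)
lhs, rhs = inner(-1j * df, g), inner(f, -1j * dg)
boundary = 1j * (np.exp(1j * (k2 - k1)) - 1.0)
assert not np.isclose(lhs, rhs, atol=1e-3)
assert np.isclose(lhs - rhs, boundary, atol=1e-3)
```

This is why the assumptions on the states (decay at infinity, periodicity, etc.) are doing real work in the self-adjointness argument.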

EDIT: Fredrik beat me to it.
