Understanding Hermitian Operators for QM Beginners

Summary: Hermitian operators satisfy ##\hat O^\dagger = \hat O##, but confusion arises when the adjoint is computed by naively complex-conjugating the operator's formula, since conjugation and the adjoint are not the same operation. The discussion below turns on how operators act on functions inside inner products, and on the distinction between the adjoint (denoted by a dagger) and the complex conjugate (denoted by an asterisk), which apply to different mathematical entities. Clarifying these concepts is essential for beginners to understand the sense in which operators are Hermitian.
kq6up
I am a QM beginner so go easy on me. I have just noticed something. Let $$\hat{O}$$ be an hermitian operator. Then $$\left( \hat { O } \right) ^{ \dagger }\neq \hat { O } $$ when it is by itself. For example $$\left( \hat { p } \right) ^{ \dagger }=i\hbar \frac { \partial }{ \partial x } \neq \hat { p } =-i\hbar \frac { \partial }{ \partial x }$$.

In context with an eigenfunction $$ \left( \hat { O } \Psi \right) ^{ \dagger }=\Psi ^{ \ast }\hat { O } ^{ \dagger }\neq \Psi ^{ \ast }\hat { O } $$. However, I guess it still doesn't work unless I give it a $$\Psi$$ to chew on the right. I guess this might finally work: $$ \left( \hat { O } \Psi \right) ^{ \dagger }\Psi =\Psi ^{ \ast }\hat { O } ^{ \dagger }\Psi =\Psi ^{ \ast }\hat { O } \Psi $$.

However if I test this with an actual example: $$\hat { O } =\hat { p } \quad\text{and}\quad \Psi ={ e }^{ ikx },\quad\text{then}\quad { e }^{ -ikx }\left(-i\hbar \frac { \partial }{ \partial x }\right){ e }^{ ikx }\neq { e }^{ -ikx }\left(i\hbar \frac { \partial }{ \partial x }\right){ e }^{ ikx }.$$ So that does not work either.

Could someone help clarify what is going on here? In what sense are these operators hermitian? I guess not in the same sense that a matrix is hermitian.

Thanks,
Chris Maness
 
kq6up said:
Let $$\hat{O}$$ be an hermitian operator. Then $$\left( \hat { O } \right) ^{ \dagger }\neq \hat { O } $$ when it is by itself. For example $$\left( \hat { p } \right) ^{ \dagger }=i\hbar \frac { \partial }{ \partial x } \neq \hat { p } =-i\hbar \frac { \partial }{ \partial x }$$.
The Hermitian conjugate is the complex conjugate transpose. You've taken the complex conjugate, but not the transpose. In this case, transpose means that the partial derivative acts to the left.
 
kq6up said:
I am a QM beginner so go easy on me. I have just noticed something. Let $$\hat{O}$$ be an hermitian operator. Then $$\left( \hat { O } \right) ^{ \dagger }\neq \hat { O } $$ when it is by itself. For example $$\left( \hat { p } \right) ^{ \dagger }=i\hbar \frac { \partial }{ \partial x } \neq \hat { p } =-i\hbar \frac { \partial }{ \partial x }$$.
This should be
$$\hat p^\dagger =\left(i\hbar\frac{\partial}{\partial x}\right)^\dagger =-i\hbar\underbrace{\left(\frac{\partial}{\partial x}\right)^\dagger}_{\displaystyle=-\frac{\partial}{\partial x}} =\hat p.$$
kq6up said:
In context with an eigenfunction $$ \left( \hat { O } \Psi \right) ^{ \dagger }=\Psi ^{ \ast }\hat { O } ^{ \dagger }\neq \Psi ^{ \ast }\hat { O } $$.
The function should always be to the right of the operator...and the dagger acts only on the operator, not on the function ##\hat O\Psi##.
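A minimal numerical sketch (assuming a periodic grid and units where ##\hbar = 1##) makes this concrete: a central-difference matrix for ##\partial/\partial x## is antisymmetric, so the transpose flips one sign, the complex conjugate flips the other, and ##\hat p## comes out Hermitian.

```python
import numpy as np

# Sketch: discretize d/dx by central differences on a periodic grid.
# The matrix D is real and antisymmetric (D^T = -D), so the discretized
# momentum P = -i*hbar*D satisfies P^dagger = P.
N = 64
dx = 2 * np.pi / N
hbar = 1.0  # units where hbar = 1

D = np.zeros((N, N))
for j in range(N):
    D[j, (j + 1) % N] = 1.0 / (2 * dx)    # forward neighbor (periodic)
    D[j, (j - 1) % N] = -1.0 / (2 * dx)   # backward neighbor (periodic)

P = -1j * hbar * D
print(np.allclose(D.T, -D))        # True: transposing flips the sign
print(np.allclose(P.conj().T, P))  # True: P is Hermitian
```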
 
Fredrik said:
This should be
$$=\underbrace{\left(\frac{\partial}{\partial x}\right)^\dagger}_{\displaystyle=-\frac{\partial}{\partial x}}$$
... or equivalently, integrating by parts, ∂/∂x acting to the left.
 
Bill_K said:
... or equivalently, integrating by parts, ∂/∂x acting to the left.
When we're not doing bra-ket notation, it seems very strange to think of any operator as "acting to the left". But yes, integration by parts is what we use in the standard non-rigorous argument for ##D^\dagger=-D## (where D is the operator that takes a differentiable function to its derivative).
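Spelled out (a sketch assuming f and g vanish at ##\pm\infty##, so the boundary term drops), the integration-by-parts argument is
$$\langle Df,g\rangle =\int (Df)^*(x)\,g(x)\,\mathrm dx =\Big[f^*(x)g(x)\Big]_{-\infty}^{\infty}-\int f^*(x)\,(Dg)(x)\,\mathrm dx =\langle f,-Dg\rangle,$$
which is the sense in which ##D^\dagger=-D##.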
 
I see what expression satisfies my little thought experiment. However, I am having a hard time understanding this equation in terms of your insight: $$\frac{\partial \Phi^*}{\partial t} = -\frac{1}{i\hbar}\Phi^*H^* = -\frac{1}{i\hbar}\Phi^*H.$$ This is from the Schrödinger-picture derivation in the Wikipedia article on the Ehrenfest theorem. Is it wrong? I have used it to work out several problems correctly, but now, considering this new information and my failed thought experiments (proofs, I guess), my head is reeling.

Thanks,
Chris Maness
 
The issue here is whether it's OK to write ##(Af)^* =f^*A^\dagger##. I'm inclined to say no, but I guess it depends on your tolerance for nonsensical mathematical expressions that can be made sense of by assigning a specific meaning to the notation. As you can tell by my choice of words, my tolerance is rather low. The only way I see to make sense of this equality is to interpret it as an abbreviated version of the statement
$$\int (Af)^*(x)g(x)\mathrm dx =\int f^*(x)(A^\dagger g)(x)\mathrm dx,~~ \text{for all g}.$$ You can see that this holds by noting that the left-hand side is equal to ##\langle Af,g\rangle##, and the right-hand side is equal to ##\langle f,A^\dagger g\rangle##. By definition of the ##\dagger## operation, these two numbers are equal for all g.
 
Fredrik said:
The issue here is whether it's OK to write ##(Af)^* =f^*A^\dagger##

I am seeing now how that is a bit funky. For example, suppose I kept the identities going, pretending these behave analogously to Hermitian matrices (which is probably why I am having a hard time with this in the first place).

If this were the case, it continues like so: ##(Af)^* =f^*A^\dagger=f^*A## if ##A## is Hermitian. Argh!

$$\int (Af)^*(x)g(x)\mathrm dx =\int f^*(x)(A^\dagger g)(x)\mathrm dx,~~ \text{for all g}.$$

I have seen this one around searching on this topic, but I get NO intuitive warm and fuzzy satisfaction from it at all. If I used it, it would just be by the force of definition -- which does not build my confidence in reaching for it.

Thanks,
Chris Maness
 
kq6up said:
I have seen this one around searching on this topic, but I get NO intuitive warm and fuzzy satisfaction from it at all. If I used it, it would just be by the force of definition -- which does not build my confidence in reaching for it.

Thanks,
Chris Maness

It's a little more intuitive if you see how it relates to the analogous fact about matrices. Are you familiar with matrices? If f and g are column matrices, and A is a square matrix (and the dimensions are appropriate) then you can define a product of the three of them as follows:

$$f^\dagger (A g)$$

where ##f^\dagger## is the complex conjugate of the transpose of ##f##.

It's pretty simple to prove, from the properties of matrix multiplication and transposition, that

$$f^\dagger (A^\dagger g) = (A f)^\dagger g$$

In terms of components, this says:

$$\sum_i f^*_i\, (A^\dagger g)_i = \sum_i (A f)^*_i\, g_i$$

Hilbert spaces of functions can be thought of intuitively as a generalization of matrices to the case where the "indices" are continuous. So instead of ##\sum_i## you have ##\int dx##. So the identity becomes:

$$\int f^*(x)\, (A^\dagger g)(x)\, dx = \int (A f)^*(x)\, g(x)\, dx$$
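A quick numerical check of this matrix identity, with randomly generated data (a sketch; the matrix and vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# arbitrary complex square matrix and column vectors
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_dag = A.conj().T               # dagger = conjugate transpose

lhs = f.conj() @ (A_dag @ g)     # f^dagger (A^dagger g)
rhs = (A @ f).conj() @ g         # (A f)^dagger g
print(np.allclose(lhs, rhs))     # True
```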
 
  • #10
The only remaining question is: why asterisks instead of daggers in Hilbert space?

$$\int f^*(x)\, (A^\dagger g)(x)\, dx = \int (A f)^*(x)\, g(x)\, dx$$

Chris
 
  • #11
stevendaryl said:
Hilbert spaces of functions can be thought of intuitively as a generalization of matrices to the case where the "indices" are continuous. So instead of ##\sum_i## you have ##\int dx##. So the identity becomes:

$$\int f^*(x)\, (A^\dagger g)(x)\, dx = \int (A f)^*(x)\, g(x)\, dx$$

It can almost certainly be made rigorous by means of the Rigged Hilbert space formalism.

But for the details one must consult specialist tomes. I did at one time - won't do that again.

Without going that deep into it, Hall's book probably resolves it:
https://www.amazon.com/dp/146147115X/?tag=pfamazon01-20

Thanks
Bill
 
  • #12
kq6up said:
The only remaining question is: why asterisks instead of daggers in Hilbert space?
$$\int f^*(x)\, (A^\dagger g)(x)\, dx = \int (A f)^*(x)\, g(x)\, dx$$
Chris

I'm not sure what your question means. Both are used in working with Hilbert space.

The basic inner product in Hilbert space is:

$$\int f^*(x)\, g(x)\, dx$$

This is analogous to the inner product for column matrices:

$$f^\dagger g$$

To see that they are exactly analogous, write the latter out in terms of components:

$$f^\dagger g = \sum_i f^*_i\, g_i$$

On the right side, there are no daggers, because the matrix components ##f_i## are just complex numbers.

Similarly, for a particular value of ##x##, ##f(x)## is just a complex number. You can take its complex conjugate, ##f(x)^*##, but using a dagger doesn't make any difference: dagger applied to a number is the same as asterisk.

The distinction, that people are not used to making, is between a function, ##f##, which is an infinite object, and the value ##f(x)## of a function at a particular value of ##x##. This is like the distinction between a column matrix ##f## and one of its components, ##f_i##.
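This analogy can be checked numerically as well: sample two functions on a grid, and the Hilbert-space integral becomes the matrix-style sum weighted by the grid spacing ##dx## (a sketch; the particular functions are arbitrary examples):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = np.exp(2j * np.pi * x)       # sampled "vector" f(x)
g = x * np.exp(1j * np.pi * x)   # sampled "vector" g(x)

# sum over the discrete index i, weighted by dx ...
as_sum = np.sum(f.conj() * g) * dx
# ... approximates the integral over the continuous index x
integrand = f.conj() * g
as_integral = np.sum((integrand[:-1] + integrand[1:]) / 2) * dx  # trapezoid rule
print(abs(as_sum - as_integral) < 1e-3)  # True: they agree up to discretization
```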
 
  • #13
kq6up said:
The only remaining question is: why asterisks instead of daggers in Hilbert space?

$$\int f^*(x)\, (A^\dagger g)(x)\, dx = \int (A f)^*(x)\, g(x)\, dx$$

Chris
The dagger is the adjoint operation. It takes operators to operators. The asterisk is the complex conjugate of a function. The complex conjugate of a function ##f:\mathbb R\to\mathbb C## is the function ##f^*:\mathbb R\to\mathbb C## defined by ##f^*(x)=(f(x))^*## for all x in the domain of f. In the formula quoted above, ##\dagger## acts on the operator ##A##, and ##*## acts on the functions ##f## and ##Af##.

The formula holds because
$$\int f^*(x) (A^\dagger g)(x) dx =\langle f,A^\dagger g\rangle =\langle (A^\dagger)^\dagger f,g\rangle =\langle Af,g\rangle = \int (A f)^*(x) g(x)\,dx.$$
 
  • #14
Fredrik said:
The dagger is the adjoint operation. It takes operators to operators. The asterisk is the complex conjugate of a function. The complex conjugate of a function ##f:\mathbb R\to\mathbb C## is the function ##f^*:\mathbb R\to\mathbb C## defined by ##f^*(x)=(f(x))^*## for all x in the domain of f. In the formula quoted above, ##\dagger## acts on the operator ##A##, and ##*## acts on the functions ##f## and ##Af##.

The formula holds because
$$\int f^*(x) (A^\dagger g)(x) dx =\langle f,A^\dagger g\rangle =\langle (A^\dagger)^\dagger f,g\rangle =\langle Af,g\rangle = \int (A f)^*(x) g(x)\,dx.$$

This might be a nit-picky mathematical point that I should keep my mouth shut about, but there certainly can be an adjoint of a function. If f is a function, then ##f^\dagger## is a functional--a function of functions that takes a function and returns a scalar. Mathematically, the action of the functional ##f^\dagger## on a function g is defined by:

$$f^\dagger(g) = \int f^*(x)\, g(x)\, dx$$

So the ##\dagger## operation acts on functions to return functionals, while the ##*## acts on complex numbers to return complex numbers.
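That "function in, functional out" picture is easy to mock up in code (a sketch; `dagger` is a hypothetical helper, and the integral is discretized on a grid):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def dagger(f):
    # Hypothetical helper: given a function f (sampled on the grid x),
    # return the functional g -> integral of f*(x) g(x) dx.
    def functional(g):
        return np.sum(f.conj() * g) * dx
    return functional

f = np.exp(2j * np.pi * x)
g = np.sin(np.pi * x)
print(dagger(f)(g))  # a single complex number: the inner product <f, g>
```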
 
  • #15
In bra-ket terminology, you're defining the adjoint of a ket as the corresponding bra, i.e. ##f^\dagger =\langle f,\cdot\rangle##. I have seen that before (here at PF), but I have never found it useful.
 
  • #16
stevendaryl said:
This might be a nit-picky mathematical point that I should keep my mouth shut about, but there certainly can be an adjoint of a function. If f is a function, then ##f^\dagger## is a functional--a function of functions that takes a function and returns a scalar. Mathematically, the action of the functional ##f^\dagger## on a function g is defined by:

$$f^\dagger(g) = \int f^*(x)\, g(x)\, dx$$

So the ##\dagger## operation acts on functions to return functionals, while the ##*## acts on complex numbers to return complex numbers.

So the equation

$$(A f)^\dagger = f^\dagger A^\dagger$$

actually is not nonsense. It means that, as a functional, ##(A f)^\dagger## is the same as ##f^\dagger A^\dagger##. Functionals are equal if they give the same value on every argument:

$$(A f)^\dagger(g) = f^\dagger(A^\dagger(g))$$
 
  • #17
Fredrik said:
In bra-ket terminology, you're defining the adjoint of a ket as the corresponding bra, i.e. ##f^\dagger =\langle f,\cdot\rangle##. I have seen that before (here at PF), but I have never found it useful.

Well, it does make identities such as the one the original poster was asking about almost trivial.
 
  • #18
stevendaryl said:
So the equation

$$(A f)^\dagger = f^\dagger A^\dagger$$

So for a matrix this is obvious to me. Let ##H## be any Hermitian matrix: $$(HM)^{\dagger}=M^{\dagger}H^{\dagger}=M^{\dagger}H$$

So in my mind, if the quoted equation is a true analog of matrix operation, the identity should continue to:

$$(A f)^\dagger = f^\dagger A^\dagger = f^\dagger A$$

Does it?

Thanks,
Chris
 
  • #19
kq6up said:
So for a matrix this is obvious to me. Let ##H## be any Hermitian matrix: $$(HM)^{\dagger}=M^{\dagger}H^{\dagger}=M^{\dagger}H$$

So in my mind, if the quoted equation is a true analog of matrix operation, the identity should continue to:

$$(A f)^\dagger = f^\dagger A^\dagger = f^\dagger A$$

Does it?

Thanks,
Chris

Yes, in the sense that the far right side and the far left side produce the same result when applied to an argument (a square-integrable function) g:

$$(A f)^\dagger g = (f^\dagger A) g = f^\dagger (A g)$$

In terms of integrals, this is a compact way of writing:

$$\int (A f)^*(x)\, g(x)\, dx = \int f^*(x)\, (A g)(x)\, dx$$
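The Hermitian case is just as easy to check numerically (a sketch; ##A## is made Hermitian by construction as ##B + B^\dagger##):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.conj().T               # Hermitian by construction: A^dagger = A
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = (A @ f).conj() @ g         # (A f)^dagger g
rhs = f.conj() @ (A @ g)         # f^dagger (A g)
print(np.allclose(lhs, rhs))     # True, precisely because A^dagger = A
```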
 
  • #20
So it works in the sense that it really needs to be understood in the context of the integral, because the integral is to the function what the summation is to the matrix element. Correct?

Chris
 
  • #21
kq6up said:
So it works in the sense that it really needs to be understood in the context of the integral, because the integral is to the function what the summation is to the matrix element. Correct?

Chris

Definitely. The understanding of functions as "vectors" in the Hilbert space, and ##\dagger## as the adjoint, requires that you understand ##f^\dagger g## as an integral:

$$f^\dagger g = \int f^*(x)\, g(x)\, dx$$

[EDIT]
I keep bringing up the analogy with matrices. In matrices, the meaning of matrix multiplication is summation: $$f^\dagger g = \sum_i f^*_i\, g_i.$$ Integration is sort of the generalization of this to a continuous index, ##x##, instead of a discrete index, ##i##.
 
  • #22
Ok, thank you. $$\text{mind} \Rightarrow \text{blown}$$

Chris
 
  • #23
Here is a handy little identity that I have used to solve a homework problem. I have seen it on several authoritative sites on self-adjoint operators:

$$\left< { \hat{O}f }|{g } \right> =\left< { f }|{ \hat{O}g } \right> $$ if ##\hat{O}## is self-adjoint.

However, I tried this little mathematical experiment to see if it works:

Let $$\Psi=e^{ikx}$$, and $$\left< { \hat { p } \Psi }|{ \Psi } \right> =\left< { \Psi }|{ \hat { p } \Psi } \right> $$

The right side equals:

$$\left< { -i\hbar \frac { \partial }{ \partial x } \Psi }|{ \Psi } \right> =\left< { -i\hbar (-i)\Psi }|{ \Psi } \right> =-\hbar \left< { \Psi }|{ \Psi } \right> =-\hbar $$

The left side equals:

$$\left< { \Psi }|{ -i\hbar \frac { \partial }{ \partial x } \Psi } \right> =-i\hbar \left< { \Psi }|{ i\Psi } \right> =\hbar \left< { \Psi }|{ \Psi } \right> =\hbar $$

It no worky: the LHS and the RHS are not identical. Where did I go wrong here?

Thanks,
Chris
 
  • #24
kq6up said:
However, I tried this little mathematical experiment to see if it works:

Let $$\Psi=e^{ikx}$$, and $$\left< { \hat { p } \Psi }|{ \Psi } \right> =\left< { \Psi }|{ \hat { p } \Psi } \right> $$

The right side equals:

$$\left< { -i\hbar \frac { \partial }{ \partial x } \Psi }|{ \Psi } \right> =\left< { -i\hbar (-i)\Psi }|{ \Psi } \right> =-\hbar \left< { \Psi }|{ \Psi } \right> =-\hbar $$
What you wrote before the first equality looks like what you had on the left, not on the right. I don't know what you're doing next, but the derivative isn't going to disappear. (Edit: Ahh...after seeing strangerep's reply, I noticed that you assumed that ##\Psi=e^{ikx}##. OK, in that case ##d/dx## is going to pull out a factor of ##ik## from ##\Psi##.)

kq6up said:
The left side equals:
Are you sure you're not confusing left with right? Remember, on your left hand, the thumb is to the right. :wink:

To verify that ##\hat p## is self-adjoint, you should use the definition of this particular inner product to prove that ##\langle\hat p f,f\rangle=\langle f,\hat pf\rangle## for all square-integrable functions f. Actually, it doesn't have to be the same function on both sides. You can prove that ##\langle\hat p f,g\rangle=\langle f,\hat pg\rangle## for all square-integrable f,g.

Hint: integration by parts, or equivalently, use the product rule in the form ##f'g=(fg)'-fg'##.

(You can of course continue to denote the function by ##\Psi## if you want. I'm using f mainly because it's easier to type).
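For reference, the computation the hint leads to runs as follows (a sketch assuming f and g vanish at ##\pm\infty##, so the boundary term from the integration by parts drops): the complex conjugation flips the sign of ##i##, and the integration by parts flips it back,
$$\langle\hat p f,g\rangle =\int\big(-i\hbar f'(x)\big)^*g(x)\,\mathrm dx =i\hbar\int (f^*)'(x)\,g(x)\,\mathrm dx =-i\hbar\int f^*(x)\,g'(x)\,\mathrm dx =\int f^*(x)\big(-i\hbar g'(x)\big)\,\mathrm dx =\langle f,\hat p g\rangle.$$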
 
  • #25
Hello,

The momentum operator needs to be treated quite carefully (as does everything, ha ha). Its self-adjointness and eigenvalue spectrum depend very strongly on the boundary conditions of the states. I would suggest writing out those inner products in integral form and very carefully going through the computation. You will see when you have to make some assumption on the boundary condition.

EDIT: Fredrik beat me to it.
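Concretely, on a finite interval ##[a,b]## the integration by parts leaves a boundary term,
$$\langle\hat p f,g\rangle-\langle f,\hat p g\rangle =i\hbar\Big[f^*(x)\,g(x)\Big]_{a}^{b},$$
which vanishes for functions that decay at infinity or satisfy periodic boundary conditions; that assumption is exactly where the boundary conditions enter.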
 
  • #26
I did the proof for the general version with the integrals. It does work. These things are very tricky indeed. It is the most nuanced math I have ever done.

Regards,
Chris
 
  • #27
kq6up said:
$$\Psi = e^{ikx}$$ [...]
The right side equals:
$$\left< { -i\hbar \frac { \partial }{ \partial x } \Psi }|{ \Psi } \right> =\left< { -i\hbar (-i)\Psi }|{ \Psi } \right> = [\cdots] $$
The 1st step looks wrong. You've taken the derivative of ##\Psi^\dagger##, not ##\Psi##.

But perhaps you've already seen this when you did it in terms of an integral?

(Oh, and I think you left out a "k", but that's not the problem.)
 
  • #28
But when it's in the bra isn't it supposed to be the conjugate?

Chris
 
  • #29
kq6up said:
But when it's in the bra isn't it supposed to be the conjugate?
No. The difference between the bra and the ket is denoted by the brackets, not by what's inside. Using your notation, ##| \Psi \rangle = |e^{ikx} \rangle## and ##\langle \Psi | = \langle e^{ikx} |##. The conjugate comes in when you write ##\langle \Phi | \Psi \rangle## as an integral.

But your notation is strange. ##e^{ikx}## implies that you are working in the position or momentum basis, while ##| \Psi \rangle## denotes a general ket vector. The standard notation for the connection between these concepts is ##e^{ikx} = \langle x | \Psi \rangle =: \Psi(x)##, where ##\Psi(x)## is called the wavefunction.
 
  • #30
kq6up said:
But when it's in the bra isn't it supposed to be the conjugate?

Chris
No. If A is an operator and f a square integrable function, then Af means the same thing no matter where you put it.

The bra that corresponds to ##A|\alpha\rangle## is ##\langle\alpha|A^\dagger##, but I see no place to use that in your calculation.

What does a product of a bra and an operator even mean? It's defined by
$$\big(\langle\alpha|A\big)|\beta\rangle =\langle\alpha|\big(A|\beta\rangle\big).$$ The product ##\langle\alpha|A## is defined by saying that this equality holds for all ##|\beta\rangle##.

The bra that corresponds to an arbitrary vector (i.e. function) f is the map ##g\mapsto \langle f,g\rangle##, which can be written as ##\langle f,\cdot\rangle##. So the bra that corresponds to Af is the map ##\langle Af,\cdot\rangle##. This map takes an arbitrary ##g## to ##\langle Af,g\rangle##, which is equal to ##\langle f,A^\dagger g\rangle##. In bra-ket notation, the map that takes ##g## to ##\langle f,A^\dagger g\rangle## is written as ##\langle f|A^\dagger##. In inner product notation, we'd have to use something super ugly like ##\langle f,A^\dagger(\cdot)\rangle## or just invent a new symbol for it.
 
