Quantum particle in a magnetic field

Summary
The discussion revolves around the Hamiltonian of a charged particle in an electromagnetic field, focusing on deriving the current density from the continuity equation. Participants explore the implications of the Schrödinger equation and the correct application of the nabla operator on wavefunctions. Key points include the necessity of careful manipulation of operators and the importance of maintaining the integrity of complex functions during differentiation. A participant successfully reformulates their approach to derive the current density, while others provide insights on operator algebra and the handling of derivatives in quantum mechanics. The conversation emphasizes the complexity of quantum mechanics and the need for precision in mathematical expressions.
  • #31
The operator in position space is just the partial derivative, and the partial derivative acts on functions on ##\mathbb{R}^{3}## such as ##A## just as it acts on wavefunctions, because they have x-dependence.
You can't really say that
$$\partial\, G(x) \psi(x) = G(x)\, \partial \psi(x),$$
can you? Check the ##[x,p]## I gave above.

In your case, the identity itself remains untouched, as it should, but the elements are acted on.

Of course you can work in any representation you like and get the result, but the result is obvious in the position representation, where you just have to take the derivative of a function.

With the identity argument, I find it difficult to recover ##[x,p]=xp-px##. If I write ##f(x)=x\,I## and follow your logic (the identity commutes with ##p##), I get a different commutation relation:
$$[x,p]= xIp - pxI = xIp - xIp = 0.$$
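A quick concrete check of the point that the derivative acts on everything to its right, including the ##x## in front of ##\psi##: a minimal SymPy sketch (my own illustration, with ##\hbar## set to 1 and ##\psi## a generic test function).

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)

def p(f):
    # momentum operator in position space: p f = -i * df/dx  (hbar = 1)
    return -sp.I * sp.diff(f, x)

# [x, p] psi = x p(psi) - p(x psi); the derivative in the second term
# hits the factor x as well, which is what produces the nonzero result.
commutator = x * p(psi) - p(x * psi)
print(sp.simplify(commutator))  # I*psi(x), i.e. [x, p] = i*hbar with hbar = 1
```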
 
  • #32
ChrisVer said:
The operator in position space is just the partial derivative, and the partial derivative acts on functions on ##\mathbb{R}^{3}## such as ##A## just as it acts on wavefunctions, because they have x-dependence.
I disagree. Consider ##(\nabla+A(x))^2##. We have
\begin{align}
(\nabla+A(x))^2=(\nabla+A(x))_i(\nabla+A(x))_i =(\partial_i+A_i(x))(\partial_i+A_i(x)) =\partial_i\partial_i+\partial_i A_i(x)+A_i(x)\partial_i+A_i(x)A_i(x)
\end{align} Now let's just consider the second term. It's a product of two operators, and products are defined by ##(AB)f=A(Bf)## for all f. So for all f, we have
$$(\partial_i A_i(x))f=\partial_i(A_i(x) f).$$ What this means is that for all y, we have
$$\big((\partial_i A_i(x))f\big)(y) = \big(\partial_i(A_i(x) f)\big)(y).$$ The right-hand side is the value at y of the partial derivative of the function ##A_i(x)f##, i.e. the function ##z\mapsto (A_i(x)f)(z)##. It's not equal to
$$\frac{\partial}{\partial x_i}\big(A_i(x)f(x)\big)$$ so the product rule doesn't apply. Sure, we have "x dependence", but we would need "y dependence".

To evaluate ##(\partial_i A_i(x))f##, we have to know the proper way to make sense of ##A_i(x)##. This is above my pay grade. In the other thread, I made a very non-rigorous argument for why we should have ##(\partial_i A_i(x))f =(\partial_i A_i)(x)f +A_i(x)\partial_i f## when ##A_i## is an operator-valued function. Obviously, the only thing non-rigorous arguments are good for is that they can help us guess what definitions may be useful and what statements we should try to prove.

ChrisVer said:
With the identity argument, I find it difficult to recover ##[x,p]=xp-px##. If I write ##f(x)=x\,I## and follow your logic (the identity commutes with ##p##), I get a different commutation relation:
$$[x,p]= xIp - pxI = xIp - xIp = 0.$$
That x is definitely the position operator, defined by ##(\hat x f)(x)=xf(x)## for all f and all x. It's not a real-valued function times the identity operator.
 
  • #33
I haven't been through the whole thread in detail, but it seems to me that maybe there's some confusion about operators and their representatives in the x-representation.

Consider the following x-representation calculation where X=position operator and x=plain old x-coordinate:
\begin{align}
\langle x \lvert PA(X) \vert \psi \rangle &= \int \langle x \lvert PA(X) \lvert x' \rangle \langle x' \lvert \psi \rangle\, dx'\\
&= \int \langle x \lvert PA(x') \lvert x' \rangle \langle x' \lvert \psi \rangle\, dx'\\
&= \int A(x') \langle x \lvert P \lvert x' \rangle \langle x' \lvert \psi \rangle\, dx'\\
&= \int A(x') [-i\hbar\, \delta'(x-x')] \langle x' \lvert \psi \rangle\, dx'\\
&= -i\hbar \frac{d}{dx}[A(x)\psi(x)]
\end{align}
The moral of the story being that although the final rule involves differentiating the function ##A(x)##, at no point do we ever have to worry about "differentiating operators".
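To back this up numerically, here is a small finite-difference check (my own sketch; ##\hbar=1##, and the choices ##A(x)=\sin x##, ##\psi(x)=e^{-x^2}## are arbitrary): applying ##P## in the x-representation to the pointwise product ##A(x)\psi(x)## on a grid reproduces ##-i\,\frac{d}{dx}[A(x)\psi(x)]##.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
A = np.sin(x)                      # arbitrary smooth A(x)
psi = np.exp(-x**2)                # arbitrary normalizable psi(x)

# P in the x-representation: -i times a finite-difference derivative (hbar = 1)
P_of_Apsi = -1j * np.gradient(A * psi, x)

# Analytic -i d/dx [A psi] = -i (A' psi + A psi')
analytic = -1j * (np.cos(x) * psi + np.sin(x) * (-2.0 * x) * psi)

print(np.max(np.abs(P_of_Apsi - analytic)))  # small, limited by grid resolution
```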
 
  • #34
I don't really understand what you are struggling to say, Fredrik... sorry...
Why would you write, for a function,
$$A(x) f(x) = h(y)$$
and not just say that
$$A(x) f(x) = h(x)\,?$$

The partial derivative then ##\partial h(x) = h'(x)## remains a function of ##x##...

Maybe we are losing it in ##A(x)##... but in my view, when ##A(\hat{x})## acts on ##\psi(x)##, you are left with just ##A(x)##, a function.
If your problem is with ##A(x)I## and its being able to commute with all operators... well, that is somewhat wrong. If your operator has the property of changing the spatial argument, then that is accounted for, obviously. Why? Because the final function that ##\partial## acts on is ##h(x)##:
the derivative is just
\begin{align}
\lim_{\epsilon \rightarrow 0} \frac{ h(x+ \epsilon) - h(x)}{\epsilon}
&=\lim_{\epsilon \rightarrow 0} \frac{ A(x+ \epsilon) f(x+ \epsilon) - A(x) f(x)}{\epsilon}\\
&=\lim_{\epsilon \rightarrow 0} \frac{ [A(x)+ \epsilon A'(x)] [f(x)+ \epsilon f'(x)] - A(x) f(x)}{\epsilon}\\
&=A'(x) f(x) + A(x) f'(x).
\end{align}

If you instead had ##A(z)## with ##z## independent of ##x##, the partial derivative wouldn't "see" it, and it would act only on ##f(x)##.
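The same limit can be reproduced symbolically; here is a sketch with SymPy (my own check; ##A## and ##f## are generic smooth functions, and ##\hbar## plays no role here).

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', real=True)
A, f = sp.Function('A'), sp.Function('f')

quotient = (A(x + eps) * f(x + eps) - A(x) * f(x)) / eps

# Expand in eps and keep the term that survives as eps -> 0
limit_value = quotient.series(eps, 0, 1).removeO().doit()

# Compare with the product rule d/dx [A(x) f(x)]; prints 0
print(sp.simplify(limit_value - sp.diff(A(x) * f(x), x)))
```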
 
  • #35
ChrisVer said:
I don't really understand what you are struggling to say, Fredrik... sorry...
Why would you write, for a function,
$$A(x) f(x) = h(y)$$
and not just say that
$$A(x) f(x) = h(x)\,?$$
(Just a reminder, both to myself and to you. What we're discussing here is the case where the ##A_i## are operator-valued functions with domain ##\mathbb R^3##, and whether it makes sense to say that we need to use the product rule because ##A_i(x)## depends on ##x##. My position is that it doesn't).

I'm not doing anything like A(x)f(x)=h(y). What I'm doing is to say that for all functions f and g, if f=g, then f and g have the same domain, and for all y in that domain, we have f(y)=g(y).

x is an element of some set S, such that for each ##x\in S##, we're dealing with two functions, I'll call them ##f_x## and ##g_x## here, to keep the notation simple. The x in the equality ##f_x=g_x## is a variable that represents some element of the set S. Regardless of what element that is, the equality means that ##f_x## and ##g_x## have the same domain, and for all ##y## in that domain, we have ##f_x(y)=g_x(y)##. ##y## is of course a dummy variable here. It can be replaced by any other symbol, except ##x##, which is already reserved to denote an element of S.

The x in the equality ##(\partial_i A_i(x))f=\partial_i(A_i(x) f)## only determines which functions we're dealing with. On the right-hand side, x is not the input to the function on which ##\partial_i## acts. It's a variable whose value determines which function the ##\partial_i## acts on. On the left-hand side, the value of x determines which operator ##\partial_i## is multiplied with.

ChrisVer said:
Maybe we are losing it in ##A(x)##... but in my view, when ##A(\hat{x})## acts on ##\psi(x)##
##\psi(x)## is just a number. I assume that you mean ##\psi##. ##\psi(x)## is an element of the range of ##\psi##. I think that if you want to sort this out, you will have to be very careful with the terminology, so that you never end up thinking of a number as a function. Also, be careful with "for all" statements.

I am extremely nitpicky with these things. I never refer to ##x^2## or ##f(x)## as a "function" for example. Some people think that's ridiculous, but it's really paying off in problems like this.

Regarding ##A(\hat x)##. I know two non-rigorous ways to deal with that, and they both yield the desired result. One way is covered in the other thread. The other is by the methods of Sakurai's book, as in Oxvillian's post. It's ##A(x)## (with x a real number or a triple of real numbers) that I think will give us a calculation that doesn't even look like the product rule.

Edit: Something to think about: How would you define the product of ##\partial_i## and ##A_i(x)##? Is it not through the usual ##(AB)f=A(Bf)##? That definition tells us that ##(\partial_i A_i(x))f=\partial_i(A_i(x)f)##. To think that the ##\partial_i## has anything to do with the symbol x here is to confuse the function ##A_i(x)f##, which takes an arbitrary y to ##(A_i(x)f)(y)##, with the function-valued function ##x\mapsto A_i(x)f##, or to first confuse f with f(x), and then interpret ##\partial_i(A_i(x)f(x))## (which is not what we're dealing with) as the ith partial derivative at x of the function ##y\mapsto A_i(y)f(y)##. (I'm using y here, because it's potentially confusing to use the already reserved symbol x as a dummy variable.)
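To make the distinction tangible, here is a toy Python construction (entirely my own illustration, not from any book): for each real ##x##, ##\mathrm{Op}(x)## is an operator (here simply multiplication by the fixed number ##a(x)##), and ##\mathrm{Op}(x)f## is a function of an independent variable ##y## in which ##x## no longer appears as a variable.

```python
import math

def a(x):
    return math.sin(x)              # an ordinary function R -> R

def Op(x):
    """The operator-valued function x |-> Op(x)."""
    c = a(x)                        # once x is chosen, c is a fixed number
    def apply(f):
        # (Op(x) f)(y) = a(x) f(y): the result is a function of y alone
        return lambda y: c * f(y)
    return apply

f = lambda y: y**2
g = Op(1.0)(f)                      # the function y |-> a(1.0) * y**2

# Differentiating g with respect to its argument never touches a(1.0):
# g'(y) = a(1.0) * 2y, so no product rule in x arises here.
dy = 1e-6
print((g(2.0 + dy) - g(2.0)) / dy)  # approximately a(1.0) * 4
```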
 
  • #36
Perhaps a summary is in order?

(1) ##A(x)## is a c-number;

(2) ##A(\hat{x})## is an operator;

(3) In the x-representation, ##\langle x \lvert A(\hat{x}) \lvert x'\rangle = A(x)\delta(x-x')## (see the finite-dimensional sketch below);

(4) Sometimes it is said that "in the x-representation, ##A(\hat{x})## is represented by ##A(x)##", when what is really meant is (3);

(5) [irrelevant to this thread] In field theory, ##\hat{A}(x)## is an operator, and this time x is simply a label.

Anyone agree/disagree?
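Here is a finite-dimensional sketch of (1)-(3) (my own construction, with all concrete choices arbitrary): on an N-point grid, ##\hat X## is diagonal in its own eigenbasis, ##A(\hat X)## is the diagonal matrix with entries ##A(x_i)##, and its matrix elements are ##A(x_i)\delta_{ij}##, the discrete counterpart of ##A(x)\delta(x-x')##.

```python
import numpy as np

x_grid = np.linspace(-1.0, 1.0, 5)
A = lambda x: x**2 + 1.0                 # an arbitrary nice function (my choice)

X_hat = np.diag(x_grid)                  # position operator in its eigenbasis
A_of_X = np.diag(A(x_grid))              # A(X_hat), defined spectrally

# <x_i| A(X_hat) |x_j> = A(x_i) * delta_ij:
for i in range(5):
    for j in range(5):
        expected = A(x_grid[i]) if i == j else 0.0
        assert np.isclose(A_of_X[i, j], expected)
print("matrix elements match A(x_i) * delta_ij")
```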
 
  • #37
I agree with that; in particular, the notation in (1) and (2) is the appropriate one, but we haven't been sticking to it in this thread. We have been using notations like A(x) for an operator, and we haven't always been concerned about what x really is.

The idea behind the notations in (1) and (2) is that we can take any nice enough function ##A:\mathbb R\to\mathbb C##, and then assign a meaning to ##A(\hat x)## by some method. I think that in the Sakurai-style argument, that method would be ##A(\hat x)=\int\mathrm dx\, A(x)|x\rangle\langle x|##. This definition ensures that
$$A(\hat x)|x\rangle =\int\mathrm dx' A(x')|x'\rangle\langle x'|x\rangle =\int dx'\, A(x')|x'\rangle \delta(x'-x) =A(x)|x\rangle.$$ Does that make sense? I feel weird about using the delta function like this when we're dealing with a "Hilbert space valued" function like ##x\mapsto A(x)|x\rangle##...and I'm using the term "Hilbert space" very loosely here, since the ##|x\rangle## aren't elements of a Hilbert space. I'm a bit concerned about the fact that with this definition of ##A(\hat x)##, we have
$$\langle x'|A(\hat x)|x\rangle =\int\mathrm dx''A(x'')\delta(x'-x'')\delta(x''-x),$$ and products of distributions are problematic. But this isn't supposed to be rigorous anyway, so maybe I should just stop worrying and treat the first delta as a function. Then we can continue ##=A(x)\delta(x'-x)##.
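A discrete analogue of the definition ##A(\hat x)=\int\mathrm dx\, A(x)|x\rangle\langle x|## avoids the delta-function worries entirely; here sums replace integrals and Kronecker deltas replace Dirac deltas (the concrete choices below are mine).

```python
import numpy as np

N = 4
A = lambda x: np.exp(x)                  # arbitrary nice function (my choice)
x_vals = np.arange(N, dtype=float)

basis = np.eye(N)                        # |x_i> as standard basis vectors
A_of_X = sum(A(x_vals[i]) * np.outer(basis[i], basis[i]) for i in range(N))

# A(x_hat)|x_k> = A(x_k)|x_k> for every k -- the discrete version of the
# calculation above:
for k in range(N):
    assert np.allclose(A_of_X @ basis[k], A(x_vals[k]) * basis[k])
print("A(x_hat)|x> = A(x)|x> holds in the discrete model")
```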
 
  • #38
Fredrik said:
We have been using notations like A(x) for an operator

I think that's the central problem in this thread. You can't possibly characterize an operator using ##A(x)##, because ##A(x)## is the continuous version of a row vector, not of a matrix. You need instead to specify the matrix element ##\langle x \lvert \hat{A} \lvert x' \rangle##, which involves two parameters, ##x## and ##x'##.

Fredrik said:
...I'm a bit concerned about the fact that with this definition of ##A(\hat x)##, we have
$$\langle x'|A(\hat x)|x\rangle =\int\mathrm dx''A(x'')\delta(x'-x'')\delta(x''-x),$$ and products of distributions are problematic. But this isn't supposed to be rigorous anyway, so maybe I should just stop worrying and treat the first delta as a function.

I think the apparent lack of rigor in this thread is notational only. That is to say, we could if we wanted to replace all the delta "functions" and integrals with appropriate well-defined statements about distributions, and even mathematicians would be happy.

This is to be contrasted with the situation in quantum field theory, where it's still not clear whether certain statements physicists write down have a well-defined mathematical meaning :rolleyes:

Then we can continue ##=A(x)\delta(x'-x)##.

I vote for that :smile:
 
  • #39
Oxvillian said:
I think the apparent lack of rigor in this thread is notational only. That is to say, we could if we wanted to replace all the delta "functions" and integrals with appropriate well-defined statements about distributions, and even mathematicians would be happy.
Yes, I wouldn't really be surprised if it's possible to make sense of every step of the Sakurai-style calculation, and if it's not, then there's a similar calculation based on a fancy spectral theorem that yields the same result. A Sakurai-style calculation is almost certainly better (in the sense of being more similar to what a mathematician would do) than what I did in the other thread. Basically I just pretended that ##A(\hat x)## can be expressed as a power series and then used the commutation relations.

So one option for the OP (who I assume is not able to cover 800 pages of topology and functional analysis this week) is to take a look at Sakurai, and learn the way of doing calculations that's taught there. Perhaps one of the other QM books is better on this, but I just looked in Ballentine, and it looks like he doesn't explain this result at all.
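For what it's worth, the power-series trick can be checked in miniature (my own sketch, with ##\hbar=1##): in the position representation, ##[p,\hat x^n]\psi=-i\,n\,x^{n-1}\psi##, and linearity extends this term by term to ##[p,A(\hat x)]=-i\,A'(\hat x)## for any ##A## with a power series.

```python
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', integer=True, positive=True)
psi = sp.Function('psi')(x)

def p(f):
    return -sp.I * sp.diff(f, x)   # momentum in position space, hbar = 1

# [p, x^n] psi = p(x^n psi) - x^n p(psi)
comm = p(x**n * psi) - x**n * p(psi)
print(sp.simplify(comm))           # -I*n*x**(n - 1)*psi(x), up to rearrangement
```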
 
  • #40
Then I think the problem/misconception is that you see ##\psi(x)## as a number, while I see it as a wavefunction, or more generally a function of ##x##.

Operators by definition act on some function and give some other function as a result. So when ##A(\hat{x})## acts on the function ##\psi(x)##, it's supposed to give some function, and this function is supposed to be ##A(x) \psi(x)##, because, exactly as you also pointed out, ##\hat{x}\psi(x) = x \psi(x)##, and this can go on for any power of ##x##. The resulting function depends only on ##x##.
As for changing everything to 3D, nothing much changes; you just need to write ##\vec{x}## everywhere. It doesn't really play a role, because your wavefunction is ##\psi(\vec{x})## too, and thus you will have ##\nabla \cdot \vec{A}(\vec{x}) \psi(\vec{x})##... I don't think you have any anisotropy, so I guess the result is the triple of what you found in 1D.
That's also why I don't understand the terminology "operator acts on operator"; although one can use it (for example, you can work out commutation relations that way), one should always keep in mind that something will be acted on later.
However, I'll have a look at Sakurai...
 