# Coulomb potential as an operator

A. Neumaier
2019 Award
Yes -- that's where I was heading. But I didn't want to "deprive" Shyan of the enjoyment I experienced (some time ago) of discovering these things by working them out.
I think he'll still enjoy it with my shortcut guide rather than a long interactive one. There are enough steps to fill in (and small pitfalls to avoid) to make it mathematically rigorous.

Actually, one can handle ##f=1/r## in a direct way by finding the implicit algebraic equation it satisfies in terms of ##x##, and proceeding similarly to the case of the inverse of a polynomial. This is enjoyable, too, and one doesn't need limits. This works for any algebraic function, but not for exponentials, etc.

blue_leaf77
Homework Helper
I had to wait 11 months to find a satisfying answer to this problem of the reciprocal operator.

Mark Harder
Gold Member
Is there any proof that doesn't use Taylor expansion of operator functions?
Just a suggestion: look up the Stone-Weierstrass theorems. The simplest one works for real continuous functions over a real interval, but others work for complex functions, quaternions, etc. For the real case, "every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function". This doesn't involve derivatives, so it's not Taylor. If you want exactness, I suppose the infinite sums involved have to converge.
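The uniform-approximation statement is easy to see numerically. A minimal sketch using NumPy's Chebyshev interpolation tools (the target function ##|x|## and the degrees chosen are just illustrations, not from the thread):

```python
import numpy as np

# Weierstrass: a continuous function on a closed interval can be
# uniformly approximated by polynomials. Illustrate with f(x) = |x|
# on [-1, 1], using Chebyshev interpolants of increasing degree.
xs = np.linspace(-1.0, 1.0, 2001)
target = np.abs(xs)

errors = []
for deg in (8, 32, 128):
    coeffs = np.polynomial.chebyshev.chebinterpolate(np.abs, deg)
    approx = np.polynomial.chebyshev.chebval(xs, coeffs)
    errors.append(np.max(np.abs(approx - target)))

# the sup-norm error shrinks as the degree grows
assert errors[0] > errors[1] > errors[2]
```

Note that this is exactly the step that fails for the unbounded operator ##x##: uniform polynomial approximation lives on a bounded interval, which is the amendment pointed out below.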

A. Neumaier
2019 Award
Just a suggestion: look up the Stone-Weierstrass theorems. The simplest one works for real continuous functions over a real interval, but others work for complex functions, quaternions, etc. For the real case, "every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function". This doesn't involve derivatives, so it's not Taylor. If you want exactness, I suppose the infinite sums involved have to converge.
This argument needs amendment since ##x## has an unbounded spectrum, while Weierstrass only works for functions of bounded operators.

Gold Member
In more explicit terms: once you know that ##[p,f(x)]=-i\hbar f'(x)## for some function ##f## of an operator vector ##x## with commuting components, you get it for ##g(x):=f(x)^{-1}## in place of ##f(x)##, too. If you start with ##f(x)=x^n## with a multi-exponent ##n## (where it follows from the definition by induction), linear combinations give it first for all polynomials, then for their inverses, then for all rational functions. Then it extends to limits of rational functions and to integrals, and Cauchy's integral theorem gives it for all functions analytic on the joint spectrum of ##x## (with the exception of any singularities). This is what you need to handle the Coulomb potential.
It's easy to show it for rational functions. But I'm afraid I lack the knowledge for the rest. Can you provide any reference on this?

Actually, one can handle ##f=1/r## in a direct way by finding the implicit algebraic equation it satisfies in terms of ##x##, and proceeding similarly to the case of the inverse of a polynomial. This is enjoyable, too, and one doesn't need limits. This works for any algebraic function, but not for exponentials, etc.
I don't quite understand. What algebraic equation do you mean?

A. Neumaier
2019 Award
Can you provide any reference on this?
Not directly. Taking a limit in an operator equation (e.g., to go from polynomials to power series) needs an appropriate topology on operators; these are discussed in books on functional analysis; similarly for the use of Cauchy's theorem (which is complex analysis applied to operators). I recommend that you read the first volume (and the beginning of the second) of the Methods of Modern Mathematical Physics series by Reed and Simon.
What algebraic equation do you mean?
##f(x)^2(x_1^2+x_2^2+x_3^2)=1##. Apply the commutator with ##p_i## on both sides using the product rule, and collect terms.
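In concrete terms, differentiating the implicit equation determines the derivative of ##f## purely algebraically, with no limits needed. A minimal symbolic check with SymPy (illustrative only; the variable `fp` stands for the unknown derivative ##\partial f/\partial x_1##):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
fp = sp.Symbol('fp')  # the unknown derivative df/dx1
r2 = x1**2 + x2**2 + x3**2
f = 1/sp.sqrt(r2)     # f = 1/r

# the implicit algebraic equation satisfied by f = 1/r:
assert sp.simplify(f**2 * r2 - 1) == 0

# differentiate f^2 (x1^2+x2^2+x3^2) = 1 w.r.t. x1 via the product
# rule, 2 f fp r^2 + 2 f^2 x1 = 0, and solve algebraically for fp:
fp_val = sp.solve(sp.Eq(2*f*fp*r2 + 2*f**2*x1, 0), fp)[0]

# it agrees with the direct derivative, i.e. with -x1/r^3:
assert sp.simplify(fp_val - sp.diff(f, x1)) == 0
```

The same elimination works for any algebraic function of ##x##, since its implicit polynomial equation can always be differentiated and solved for the derivative.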

By the way, due to limitations of the current copying software here on PO, when you copy part of a post containing equations you often need (as in your previous post) to edit the formulas to make them come out correctly.

rubi
By the functional calculus, ##f(x)## is the operator that multiplies by ##f(x)##. You can then do the calculation directly:
##[p,f]\Psi = -i (f\Psi)^\prime + i f\Psi^\prime = -i f^\prime\Psi - i f \Psi^\prime + i f \Psi^\prime = -i f^\prime\Psi##
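That one-line calculation can be checked symbolically. A quick SymPy sketch in one dimension, with ##\hbar## kept explicit and ##f##, ##\Psi## left as generic functions (illustrative only):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)      # multiplication operator f(x)
psi = sp.Function('psi')(x)  # arbitrary wavefunction

# momentum in the position representation: p g = -i hbar g'
p = lambda g: -sp.I*hbar*sp.diff(g, x)

# [p, f] psi = p(f psi) - f p(psi); the f psi' terms cancel
commutator = p(f*psi) - f*p(psi)
assert sp.simplify(commutator + sp.I*hbar*sp.diff(f, x)*psi) == 0
```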

bob012345
Gold Member
I want to calculate the commutator ##{\Large [p_i,\frac{x_j}{r}]}## but I have no idea how I should work with the operator ##{\Large\frac{x_j}{r} }##.
Is it ## x_j \frac 1 r ## or ## \frac 1 r x_j ##? Or these two are equal?
How can I calculate ##{\Large [p_i,\frac 1 r]}##?
Thanks
Just curious, what are you going to do with this?

A. Neumaier
2019 Award
By the functional calculus, ##f(x)## is the operator that multiplies by ##f(x)##. You can then do the calculation directly:
##[p,f]\Psi = -i (f\Psi)^\prime + i f\Psi^\prime = -i f^\prime\Psi - i f \Psi^\prime + i f \Psi^\prime = -i f^\prime\Psi##
nice and simple, thanks!

Gold Member

Just curious, what are you going to do with this?
I'm trying to calculate the commutator of the Hydrogen Hamiltonian with the LRL vector.

Gold Member
I've found a document that does some calculations related to LRL vector operator.
Somewhere in it, the author uses the equation below:
##{\Large \mathbf{L} \cdot \frac{\mathbf{x}}{r}=\frac{1}{r}(\mathbf{L} \cdot \mathbf{x})+\mathbf{x} \cdot (\mathbf{L} \frac{1}{r})+\frac{1}{r} (\mathbf{x} \cdot \mathbf{L}) }##
But I don't know how he got this. I can't prove it. This is really annoying me. Can anyone help?
Thanks

blue_leaf77
Homework Helper
Well, you can regard ##\mathbf{L}## as a differential operator (which it is in the position basis), and then place a state at the rightmost position. Writing it out in components,
$$L_i\frac{x_i}{r}\psi$$
Now this one has the form ##\hat{D}(fgh)##, where ##\hat{D}## is a differential operator and ##f##, ##g##, and ##h## are all functions of the variable with respect to which ##\hat{D}## differentiates. So it's just the product rule for derivatives.
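As an illustration of that product rule, here is a SymPy sketch with ##L_z## taken as the differential operator ##-i\hbar(x\,\partial_y - y\,\partial_x)## and ##f=x##, ##g=1/r##, ##h=\psi## (a hypothetical check, not from the thread):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.Function('psi')(x, y, z)

# L_z = -i hbar (x d/dy - y d/dx) in the position representation
Lz = lambda g: -sp.I*hbar*(x*sp.diff(g, y) - y*sp.diff(g, x))

# Leibniz rule for the first-order operator L_z on a triple product:
lhs = Lz(x/r * psi)
rhs = Lz(x)/r*psi + x*Lz(1/r)*psi + (x/r)*Lz(psi)
assert sp.simplify(lhs - rhs) == 0

# 1/r is rotationally invariant, so L_z annihilates it:
assert sp.simplify(Lz(1/r)) == 0
```

The second assertion is why several terms in these LRL manipulations drop out: any angular-momentum component acting on a function of ##r## alone gives zero.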

vanhees71
Gold Member
2019 Award
Well, a pragmatic way to solve these riddles is to work in the position representation (aka wave mechanics). There it's clear that a function of the position operator is simply defined via
$$\langle \vec{x}|f(\hat{\vec{x}}) \psi \rangle=\langle f^{\dagger}(\hat{\vec{x}}) \vec{x}|\psi \rangle = \langle f^*(\vec{x}) \vec{x} |\psi \rangle = f(\vec{x}) \langle \vec{x}|\psi \rangle.$$
Further you can prove that
$$\langle \vec{x}|\hat{p} \psi \rangle=-\mathrm{i} \vec{\nabla} \langle \vec{x}|\psi \rangle.$$
To prove this you only need to know that the momentum operator generates translations, i.e., you have
$$\exp(-\mathrm{i} \vec{\xi} \cdot \hat{\vec{p}}) |\vec{x} \rangle=|\vec{x} + \vec{\xi} \rangle.$$
So from this you find
$$\langle \vec{x}+\vec{\xi}|\psi \rangle = \langle \vec{x}|\exp(+\mathrm{i} \vec{\xi} \cdot \hat{\vec{p}})|\psi \rangle.$$
Taking the gradient wrt. ##\vec{\xi}## and then setting ##\vec{\xi}=0## gives
$$\vec{\nabla}_x \langle \vec{x}|\psi \rangle=\mathrm{i} \langle \vec{x}|\hat{p} \psi \rangle.$$
Now it's pretty easy to calculate all kinds of commutators etc. just using the position representation. It's also more convenient now to use this representation in terms of differential operators and products of position functions as just derived, i.e., you write
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle, \quad \hat{O} \psi(\vec{x})=\langle \vec{x}|\hat{O} \psi \rangle$$
where ##\hat{O}## is some operator-valued function of ##\hat{\vec{x}}## and ##\hat{\vec{p}}##. You must only be careful with operator ordering in products of ##\hat{\vec{x}}## and ##\hat{\vec{p}}##. In this notation we have without operator-ordering trouble
$$\hat{p} \psi(\vec{x})=-\mathrm{i} \vec{\nabla} \psi(\vec{x}), \quad f(\hat{\vec{x}}) \psi(\vec{x})=f(\vec{x}) \psi(\vec{x}).$$
There is also no commutator problem in the definition of orbital angular momentum since components of position and momentum wrt. a Cartesian coordinate system in different directions commute, i.e., you simply have
$$\hat{\vec{L}} \psi(\vec{x})=\hat{\vec{x}} \times \hat{\vec{p}} \psi(\vec{x})=-\mathrm{i} \vec{x} \times \vec{\nabla} \psi(\vec{x}).$$
The Hamiltonian for the Kepler problem thus reads
$$\hat{H}=-\frac{1}{2m} \Delta - \frac{Z e^2}{4 \pi r}.$$
Now you can prove all the formulae mentioned in this thread by using well-known rules of partial derivatives.
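For instance, the commutator ##[\hat{p}_x, 1/r]## from the original question follows from one line of calculus in this representation. A SymPy check (illustrative, with ##\hbar## kept explicit):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.Function('psi')(x, y, z)

# p_x in the position representation: p_x g = -i hbar dg/dx
px = lambda g: -sp.I*hbar*sp.diff(g, x)

# [p_x, 1/r] psi = p_x(psi/r) - (1/r) p_x(psi)
comm = px(psi/r) - px(psi)/r

# the psi-derivative terms cancel, leaving i hbar x/r^3 psi,
# i.e. [p_x, 1/r] = -i hbar d(1/r)/dx
assert sp.simplify(comm - sp.I*hbar*x/r**3*psi) == 0
```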

In the treatment of the Kepler problem using the Runge-Lenz vector, as in the paper posted above, it's more convenient to stay in Cartesian coordinates and just use brute-force calculus to derive all the necessary commutators and other formulae. That's just dull work, which you can just as well do using a computer algebra system like Mathematica ;-)).
