What is the definition of integration on operators?

  • #1
qasdc
"Integration" on operators

Hi!
I am having some difficulty finding a definition of a kind of inverse operation (integration) for the derivative with respect to an operator, which may be defined as follows.
Suppose we have a function of n, in general noncommuting, operators [tex] H(q_1 ,..., q_n) [/tex]. Then differentiation with respect to one of them can be defined as,
[tex]
\begin{equation}
\frac{\partial H}{\partial q_i}=\lim_{\lambda \rightarrow 0}\frac{\partial}{\partial \lambda}H(q_1 ,...,q_i +\lambda,..., q_n)
\end{equation}
[/tex]
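For concreteness, this limit definition can be checked symbolically. Below is a minimal sketch (not from the thread) using SymPy's noncommutative symbols; the operator H and the function name `op_derivative` are illustrative choices, and lambda is an ordinary commuting parameter.

```python
import sympy as sp

# Sketch (not from the thread) of the limit definition above, using SymPy's
# noncommutative symbols; lam is an ordinary commuting parameter.
q, p = sp.symbols('q p', commutative=False)
lam = sp.Symbol('lambda')

def op_derivative(H, x):
    """dH/dx: shift x -> x + lam, differentiate in lam, let lam -> 0."""
    shifted = sp.expand(H.subs(x, x + lam))
    return sp.expand(sp.diff(shifted, lam).subs(lam, 0))

H = q*p*q  # an arbitrary illustrative choice
dq = op_derivative(H, q)   # p*q + q*p
dp = op_derivative(H, p)   # q**2
```

Note that the derivative respects the operator ordering: shifting q by a scalar and expanding keeps the noncommuting factors in place.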
I found the above definition in the paper "Exponential Operators and Parameter Differentiation in Quantum Physics", R. M. Wilcox, J. Math. Phys. 8, 962 (1967), which cites the book by Louisell, "Radiation and Noise in Quantum Electronics". Unfortunately I do not have access to that book at the moment; there is a good chance I could find what I am looking for there.

I would appreciate it if someone could provide a definition, if one exists, and in addition point out some references for further study.

Thank you in advance.
 
Last edited:
  • #2


Let me rephrase my question to make things a bit simpler.

Suppose that you have an equation,
[tex]
\begin{equation}
\frac{\partial H}{\partial q_1}(q_1 ,..., q_n)=1
\end{equation}
[/tex]
where the derivative is defined according to my previous post.

Would that imply that,
[tex]
\begin{equation}
H(q_1 ,..., q_n)=q_1 + f(q_2 ,..., q_n)
\end{equation}
[/tex]
or not?
 
  • #3


Hello qasdc,

I think one has to be careful here. When it comes to operators, there can always be some problem hiding there. Consider the function [itex]H(p,q) = q + pq - qp[/itex]. I'm not familiar with this kind of differentiation, but your definition seems to give

[tex]
\frac{\partial H}{\partial q} = 1 + p - p = 1.
[/tex]

But clearly [itex] pq -qp \neq f(p)[/itex].
 
  • #4


Jano L. said:
Hello qasdc,

I think one has to be careful here. When it comes to operators, there can always be some problem hiding there. Consider the function [itex]H(p,q) = q + pq - qp[/itex]. I'm not familiar with this kind of differentiation, but your definition seems to give

[tex]
\frac{\partial H}{\partial q} = 1 + p - p = 1.
[/tex]

But clearly [itex] pq -qp \neq f(p)[/itex].

Jano L. , thank you for your answer.

However, I do not understand what your point is.
The definition of the above derivative is very clear, and the result you got for [itex]H(p,q) = q + pq - qp[/itex] is correct, since


[tex]
\begin{equation}
H(p,q) = q + pq - qp = q - [q,p] = q - i \hbar
\end{equation}
[/tex]
from which it is obvious that only the first term survives, giving [tex]\frac{\partial H}{\partial q} = 1[/tex]

However, I now believe that the "antiderivative" that I seek is not at all trivial to define, which also explains the lack of answers...
 
  • #5


qasdc said:
However, I now believe that the "antiderivative" that I seek is not at all trivial to define, which also explains the lack of answers...

A lot depends on the details of the commutation relations among the various [itex]q_i[/itex] operators. In your initial post, you said they're noncommuting but gave no other details, so it's not clear whether one can assume canonical commutation relations (which would simplify things a lot). Can you post more detail on that?

There are general formulas which might be relevant somehow. Here's one...

Let f,g be noncommuting quantities and let F(f,g) be any smooth function of f,g. Then
[tex]
\def\eps{\epsilon}
\Big[ f \,,\, F(f,g) \Big] ~=~ \lim_{\eps\to 0} \frac{F(f, g + \eps [f,g]) ~-~ F(f,g)}{\eps}.
[/tex]
(where I might have missed a factor of i, or something).

Looking at the proof of the above (which proceeds by induction on the number of operations in F), it might be possible to prove your desired result by similar means. If you post a bit more detail on your [itex]q_i[/itex], I might give it a go...

Supposing however, that one can give a sensible meaning to the differentiation-by-operator, I don't see how you could do much better for an "antiderivative" than something that works like this:
[tex]
\frac{\partial H}{\partial q_i} ~=~ 1 ~~~\Rightarrow~~~
H ~=~ q_i + X
[/tex]
where X is an arbitrary quantity such that
[tex]
\frac{\partial X}{\partial q_i} ~=~ 0
[/tex]
I.e., take an ansatz for H as a general analytic expression and "differentiate" term by term to see what you get. Depending on the details of your Lie algebra, you might get lucky.
 
Last edited:
  • #6


qasdc,
my point was that in general, for any two finite matrices, [itex]AB-BA[/itex] is a function of both [itex]A[/itex] and [itex]B[/itex] which is not a multiple of the unit matrix. The case where [itex]AB-BA = i\hbar[/itex] is a very special one; for finite matrices, such a case is in fact impossible.
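The impossibility for finite matrices follows from taking the trace: tr(AB) = tr(BA), so the trace of any commutator vanishes, while the trace of a nonzero multiple of the identity does not. A quick numerical sanity check (a sketch with random 4×4 matrices, names hypothetical):

```python
import numpy as np

# Sketch: the trace of any commutator of finite matrices vanishes, so
# AB - BA can never equal a nonzero multiple of the identity matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C = A @ B - B @ A

trace_is_zero = np.isclose(np.trace(C), 0.0)   # True: tr(AB) = tr(BA)
C_is_zero = np.allclose(C, np.zeros((4, 4)))   # False: C itself is nonzero
```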
 
Last edited:
  • #7


Thank you both for your replies.
strangerep, your reply is much more than I hoped for.

Sorry for not mentioning that I am interested in the case where canonical commutation relations hold.
So let me sort things out.
First of all, the definition of the derivative that I provided above should hold whatever the commutation relations of the [itex]q_i[/itex] are, as long as one keeps in mind that one is dealing with noncommuting operators.
So, Jano L., that definition will give a result of 1 for the [itex]H(q,p)[/itex] you provided, even in the general case where [itex][q,p]=c[/itex] for some operator [itex]c[/itex]. In that sense you could say that the partial derivative of [itex]c[/itex] with respect to [itex]q[/itex] equals 0 for this kind of derivative. So this is consistent.

However, I am most interested in the very specific case in which,
[tex]
H:=H(q,p)
[/tex]
where [itex]q,p[/itex] satisfy the usual canonical commutation relations.

So, in that case, what would a formal definition of an antiderivative look like?

Now, let me give an example of why, I believe, it is not at all trivial to define such an operation.
Let,
[tex]
H(q,p)=qpq
[/tex]
and
[tex]
F(q,p)=F(q)=q^2
[/tex]

It is obvious that [itex]H(q,p)[/itex] satisfies the condition to be an integral of [itex]F(q)[/itex] with respect to [itex]p[/itex], but so do,
[tex]
G_1(q,p)=q^2 p
[/tex]
[tex]
G_2(q,p)=pq^2
[/tex]
and generally any,
[tex]
I(q,p)=qpq+f(q)
[/tex]
In that sense, it seems that [itex]I(q,p)[/itex] is the most general form of the "integral", but my problem is how one can define that "integration" operation formally so that we arrive naturally at this result.

Finally, strangerep, I would appreciate it if you could provide any related bibliography that you are aware of.
 
  • #8


qasdc said:
[...] I am interested in the case where canonical commutation relations hold.
Oh.
That makes everything much easier, since [itex][q,p][/itex] commutes with both p and q.

If [itex][q,p] = 1[/itex], then it can be shown (reasonably easily by induction) that:
[tex]
\def\Pdrv#1#2{\frac{\partial #1}{\partial #2}}
[q, F(p)] ~=~ F'(p) ~;~~~~~ [G(q),p] ~=~ G'(q)
[/tex]
So when you have these sorts of "derivative by operators", you can translate it back to commutators and sometimes that helps to get your thinking straight.
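One way to sanity-check the [itex][q, F(p)] = F'(p)[/itex] rule (a sketch, not from the thread) is in the representation where q acts as multiplication by x and p as -d/dx, so that [q, p] = 1. Here both sides are compared for F(p) = p^3 acting on a test function; all function names are illustrative.

```python
import sympy as sp

# Sanity check (sketch) of [q, F(p)] = F'(p) when [q, p] = 1, using the
# representation q = multiplication by x, p = -d/dx, on a test function f(x).
x = sp.symbols('x')
f = sp.Function('f')(x)

def q_op(g):
    return x * g

def p_op(g):
    return -sp.diff(g, x)

def p2(g):
    return p_op(p_op(g))

def p3(g):
    return p_op(p_op(p_op(g)))

# [q, p^3] f  versus  3 p^2 f  (i.e. F(p) = p^3, so F'(p) = 3 p^2)
lhs = sp.expand(q_op(p3(f)) - p3(q_op(f)))
rhs = sp.expand(3 * p2(f))
```

Working it out by hand: [x, -d³/dx³] f = -x f''' + (x f)''' = 3 f'', which is indeed 3 p² f in this representation.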

Consider also that the commutator is a "derivation" (meaning it satisfies the Leibniz product rule), just as an ordinary derivative does.

Also note that in this formalism, q and p are to be treated like independent "variables", i.e.,
[tex]
\Pdrv{p}{q} = 0 = \Pdrv{q}{p}
[/tex]

Slightly more generally, if f,g are noncommuting quantities such that [itex][f,g][/itex] commutes with both f and g, then
[tex]
[f, F(f,g)] ~=~ \Pdrv{F}{g} [f,g] ~=~ [f,g] \Pdrv{F}{g}
[/tex]
(The proof is a corollary of the other result I posted earlier.)

[tex]
H(q,p)=qpq
[/tex]
and
[tex]
F(q,p)=F(q)=q^2
[/tex]
It is obvious that [itex]H(q,p)[/itex] satisfies the condition to be an integral of [itex]F(q)[/itex] but so do,
[tex]
G_1(q,p)=q^2 p
[/tex]
[tex]
G_2(q,p)=pq^2
[/tex]
and generally any,
[tex]
I(q,p)=qpq+f(q)
[/tex]
In that sense, it seems that [itex]I(q,p)[/itex] is the most general form of the "integral" but my problem is how can one define that "integration" operation formally so that we arrive naturally at this result.
It may help to always rearrange each term in your initial expression into a "standard" order, e.g., with all q's standing in front of p's (or vice versa -- as long as you're consistent).
So instead of working with the H you wrote above, rewrite it as
[tex]
H = qpq = q(qp-1) = q^2p - q
[/tex]
Then your I(q,p) idea above should get you close to the answer. I.e., work out the integral as if for independent commuting variables, but use a function of the conjugate variable as the integration constant, and maintain your "standard" ordering convention carefully.
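This recipe can be sketched with commuting placeholder symbols: integrate the normal-ordered symbol as if q and p were ordinary variables, add an arbitrary f(q) as the "constant", and then reinterpret the result in the chosen (q-before-p) order. A minimal illustration, with all names hypothetical:

```python
import sympy as sp

# Sketch of the normal-ordering recipe: use commuting placeholders q_c, p_c
# for the normal-ordered symbol (all q's to the left of all p's), integrate
# as for ordinary variables, and add an arbitrary f(q) as the "constant".
qc, pc = sp.symbols('q_c p_c')

def antiderivative_wrt_p(F, const=0):
    """Integrate the normal-ordered symbol with respect to p."""
    return sp.integrate(F, pc) + const

fq = sp.Function('f')(qc)              # arbitrary integration "constant" f(q)
I = antiderivative_wrt_p(qc**2, fq)    # q^2 p + f(q), read in normal order;
# e.g. H = qpq = q^2 p - q is the special case f(q) = -q.
```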

Also, have a think about how the Leibniz rule would work if you were going in the other direction. (I.e., write it out in terms of commutators, and then again in terms of formal derivatives.)

Finally, strangerep I would appreciate if you could provide me with any related bibliography that you are aware of?
The thing about commutators involving functions of canonically conjugate variables is quite common in QM textbooks -- but I can't recommend any particular one since I've always just worked out that stuff by hand when needed. The more sophisticated formulas I mentioned were taught to me by Arnold Neumaier via unfinished unpublished papers, so I'm not at liberty to say much more than I have already. But I'm not sure you really need that stuff anyway if canonical commutation relations are sufficient.

Do you have a more specific Hamiltonian in mind? What is the underlying problem or application?
 
Last edited:
  • #9


Sorry for my late reply and thank you for another very informative post.

I have been aware of most of what you mention in your last post, except for the formula:
[tex]
[f, F(f,g)] ~=~ \frac{\partial F}{\partial g} [f,g] ~=~ [f,g] \frac{\partial F}{\partial g}
[/tex]
Normal ordering, for example, is treated in the book by Louisell that I mentioned in my first post. Using that, one could then define an antiderivative, at least for functions that can be expanded in a series in [itex]q, p[/itex]. But one might not want to restrict oneself that much...

I do not think there is any point in going into more detail about this; we would probably exceed the purpose of this forum. If, however, I find anything interesting regarding it, or if I feel I need to discuss these matters further, I will certainly contact you by PM.

I made this thread because I wanted to make sure that this "integral" definition is not something trivial that I just happened to be ignoring... But as it turned out, this is not the case.

PS: By the way, by mentioning Arnold Neumaier, I came across this book: "Classical and Quantum Mechanics via Lie Algebras", which seems interesting and I might have a look at it.
 
Last edited by a moderator:
  • #10


qasdc said:
[...] one could then define an antiderivative, at least for functions that can be expanded in a series in [itex]q, p[/itex]. But one might not want to restrict oneself that much...
The formulas I've mentioned are not necessarily restricted to functions that can be expanded in a series. With care, they can be applied to quotients, continued fractions, and even general analytic functions with poles.

I do not think there is any point in getting in any more details about this [...]
OK.
 

