Preservation of Poisson Bracket Structure upon quantization?

  1. Nov 16, 2013 #1
    When (canonically) quantizing a classical system we promote the Poisson brackets to (anti-)commutators. Now I was wondering how much of the Poisson bracket structure is preserved. For example, for a classical (continuous) system we have
    $$ \lbrace \phi(z), f(\Pi(y)) \rbrace = \frac{\delta f(\Pi(y))}{\delta \Pi(z)}, $$
    where the derivative is the functional derivative and the bracket denotes the classical Poisson bracket for fields
    $$ \lbrace F, G \rbrace := \int \text{d} x \left[\frac{\delta F}{\delta \phi(x)} \frac{\delta G}{\delta \Pi(x)} - \frac{\delta G}{\delta \phi(x)} \frac{\delta F}{\delta \Pi(x)}\right]. $$
    Does this mean that when we quantize using the rule
    $$ \lbrace \phi, \Pi \rbrace \rightarrow -i \left[ \phi, \Pi \right]_{\pm}, $$
    where + stands for commutator (bosons) and - stands for anti-commutator (fermions), we automatically obtain
    $$ \left[ \phi(z), f(\Pi(y)) \right]_{\pm} = i\frac{\delta f(\Pi(y))}{\delta \Pi(z)},$$
    and similar formulae for other structures of the classical Poisson bracket (say, Hamilton's equations of motion)? I was wondering this because I want to compute
    $$ \left[ H_{12}, \frac{1}{E-H_{22}}\right], $$
    where $$H_{12}$$ and $$H_{22}$$ are functions of several different (second-quantized) creation and annihilation operators. I was able to check the preservation of this particular rule in the first-quantized situation
    $$[x,p]= i,$$
    where it is easy to check that this leads to
    $$[x,f(p)] = i \partial_{p} f(p),$$
    either by using a test function and the regular coordinate-space representation of the operators x and p, or simply by plugging in a monomial for f(p) and using the commutation rules. I am not sure how to prove a generalized version of this, though.
    Any help would be appreciated.
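    The monomial check described above can also be done mechanically. Below is a small Python sketch (all the names are my own; the only input is the relation [x, p] = i, applied by normal ordering) that confirms, for instance, [x, p^3] = 3i p^2:

```python
# Sketch: verify [x, p^n] = i n p^(n-1) assuming only [x, p] = i (hbar = 1).
# An operator is a dict mapping a word (tuple of 'x'/'p' factors) to a complex coefficient.

def normal_order(op):
    """Rewrite every word with all x's to the left, using p x = x p - i."""
    out, work = {}, dict(op)
    while work:
        word, c = work.popitem()
        for k in range(len(word) - 1):
            if word[k] == 'p' and word[k + 1] == 'x':
                swapped = word[:k] + ('x', 'p') + word[k + 2:]
                shorter = word[:k] + word[k + 2:]
                work[swapped] = work.get(swapped, 0) + c
                work[shorter] = work.get(shorter, 0) - 1j * c
                break
        else:  # word is already normal ordered
            out[word] = out.get(word, 0) + c
    return {w: c for w, c in out.items() if c != 0}

def mul(a, b):
    prod = {}
    for w1, c1 in a.items():
        for w2, c2 in b.items():
            prod[w1 + w2] = prod.get(w1 + w2, 0) + c1 * c2
    return normal_order(prod)

def comm(a, b):
    ab, ba = mul(a, b), mul(b, a)
    diff = {w: ab.get(w, 0) - ba.get(w, 0) for w in set(ab) | set(ba)}
    return {w: c for w, c in diff.items() if c != 0}

x = {('x',): 1}
p = {('p',): 1}
p3 = {('p', 'p', 'p'): 1}

print(comm(x, p))   # {(): 1j}           i.e. [x, p] = i
print(comm(x, p3))  # {('p', 'p'): 3j}   i.e. [x, p^3] = 3i p^2
```

    The same reduction handles any polynomial in x and p, so it can be used to spot-check the general claim term by term.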
  3. Nov 16, 2013 #2
    It absolutely does mean that! The Poisson bracket and the commutator are both derivations: bracketing with a fixed element acts like taking a derivative. They're obviously linear, which gets you most of the way there, and additionally they obey the product rule: [itex][a, bc] = b[a,c] + [a, b]c[/itex]. With those relations in hand, you can use induction to show that [itex][x,p]=i[/itex] implies [itex][x, f(x, p)] = i \frac{\partial f}{\partial p}[/itex] and [itex][p, f(x,p)] = -i \frac{\partial f}{\partial x}[/itex]. This fact also allows you to derive Hamilton's equations of motion, the Heisenberg operator equation, and pretty much any other quantum mechanics formula that has an [itex]i[/itex] in it, by looking at the analogous Poisson bracket equation in classical mechanics.
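    The product rule quoted above is an identity in any associative algebra, so it can be sanity-checked with arbitrary matrices; a minimal NumPy sketch (the variable names are my own):

```python
# Check the derivation property [a, bc] = b[a, c] + [a, b]c for random matrices.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((4, 4)) for _ in range(3))

def comm(u, v):
    return u @ v - v @ u

lhs = comm(a, b @ c)
rhs = b @ comm(a, c) + comm(a, b) @ c
print(np.allclose(lhs, rhs))  # True
```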
  4. Nov 16, 2013 #3
    Thanks! So now imagine the case where I second-quantize my system, and I'm talking about fermions; that is, I have the anti-commutator
    $$ \left[a^{\dagger}_{i},a_{j}\right]_{-} = \delta_{ij}, $$
    for the creation and annihilation operators in momentum space. And say I have some other operators, b and c, that obey the same rules; then I can say
    $$ \left[a^{\dagger}_{i}, f(a^{\dagger}_{j},a_{j},b_{k},c_{l})\right]_{-} = \frac{\partial f}{\partial a_{i}}. $$
    I somehow have an intuition that this should be true, but I am unsure how to prove (or disprove) this. Any hints or references in the right direction would be appreciated :)
  5. Nov 16, 2013 #4



    What Chopin told you applies to commutators.

    One must be more careful with anticommutators. To see this, try the following example manually:
    $$\left[a^\dagger_i \,,\, a_i a_j \right]_-$$
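    For reference, carrying out that evaluation with ##a_i a^\dagger_i = 1 - a^\dagger_i a_i## and ##a_j a^\dagger_i = -a^\dagger_i a_j## gives, for ##j \neq i##,
    $$\left[a^\dagger_i \,,\, a_i a_j \right]_- = a^\dagger_i a_i a_j + a_i a_j a^\dagger_i = \left(2 a^\dagger_i a_i - 1\right) a_j \,,$$
    which is not simply ##a_j##, so the naive derivative rule fails for the anticommutator here.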
  6. Nov 16, 2013 #5
    There may be a fancier way to do it, but the one I know is by induction. Say we have two operators [itex]p[/itex] and [itex]q[/itex], with [itex][q, p] = i[/itex]. (Note that strangerep is right: this only works for commutators. I expect there's some analogous thing you can do with anticommutators, but I'm not sure exactly what that might be.)

    Focus first on a function [itex]P_n(p, q)[/itex] that contains a single term, which is a product of [itex]n[/itex] of these operators. We want to show that [itex][q, P_n(p, q)] = i \frac{\partial}{\partial p} P_n(p, q)[/itex]. The base case is terms of length 1:
    [tex][q, q] = 0 = i \frac{\partial}{\partial p} q\\
    [q, p] = i = i \frac{\partial}{\partial p} p[/tex]
    So [itex][q, P_1(p, q)] = i \frac{\partial}{\partial p} P_1(p, q)[/itex], where [itex]P_1(p, q)[/itex] is all terms of length 1. Now for the induction step, we handle terms of length [itex]n+1[/itex], so we have (letting [itex]x[/itex] denote either [itex]p[/itex] or [itex]q[/itex]):

    [tex][q, xP_n(p, q)] = x[q, P_n(p, q)] + [q, x]P_n(p, q) = i x \left(\frac{\partial}{\partial p} P_n(p, q)\right) + i \left(\frac{\partial}{\partial p} x\right)P_n(p, q) = i \frac{\partial}{\partial p}\left(xP_n(p, q)\right)[/tex]

    Therefore [itex][q, P_{n+1}(p, q)] = i \frac{\partial}{\partial p} P_{n+1}(p, q)[/itex] for all terms [itex]P_{n+1}(p, q)[/itex] of length [itex]n+1[/itex], and the induction is complete. Since the commutator is also linear, what is true for one polynomial term is true for any sum of polynomial terms. That means the theorem is true for any polynomial, as well as any other non-polynomial function which is equal to its Taylor expansion. The same process can be used to show [itex][p, P(p, q)] = -i\frac{\partial}{\partial q} P(p, q)[/itex].

    Note that since the PB is also a derivation (as defined above), this same proof works for it, so showing that the commutator of [itex]p[/itex] and [itex]q[/itex] is equal to their PB is sufficient to show that any functions of [itex]p[/itex] and [itex]q[/itex] are as well. You can also use the same proof to show the Heisenberg operator equation--if [itex]\frac{\partial}{\partial q}H = -\dot{p}[/itex] and [itex]\frac{\partial}{\partial p}H = \dot{q}[/itex], then that means that [itex][p, H] = i \dot{p}[/itex], and [itex][q, H] = i \dot{q}[/itex], and the same induction trick then proves that [itex][F(p, q), H] = i\dot{F}(p, q)[/itex] for any function [itex]F[/itex] of [itex]p[/itex] and [itex]q[/itex].
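    For concreteness, the first induction step written out for the term [itex]p^2[/itex]:
    [tex][q, p^2] = p[q, p] + [q, p]p = 2ip = i \frac{\partial}{\partial p} p^2[/tex]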
    Last edited: Nov 16, 2013
  7. Nov 16, 2013 #6



    For any given fermionic mode ##a_i## we have:
    $$a_i^2 ~=~ 0 ~=~ (a^\dagger_i)^2 ~.$$
    Hence the possible functions of a set of fermionic modes are rather more restricted. Evaluating the anticommutator is then (mostly) an exercise in counting minus signs. :biggrin:
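    All of this can be checked numerically by realizing two fermionic modes as explicit 4x4 matrices (a Jordan–Wigner-style construction; the matrices and names below are my own choices, not anything from the thread):

```python
# Two fermionic modes as 4x4 matrices: a1 = sm (x) I, a2 = Z (x) sm,
# where sm = |0><1| and the Z factor supplies the fermionic minus signs.
import numpy as np

I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilator
Z = np.diag([1.0, -1.0])

a1 = np.kron(sm, I2)
a2 = np.kron(Z, sm)

def anti(u, v):
    return u @ v + v @ u

def comm(u, v):
    return u @ v - v @ u

# Canonical anticommutation relations, including a_i^2 = 0:
assert np.allclose(anti(a1, a1.T), np.eye(4))  # {a1, a1+} = 1
assert np.allclose(anti(a1, a2), 0)            # {a1, a2} = 0
assert np.allclose(a1 @ a1, 0)                 # a1^2 = 0

n1 = a1.T @ a1
# The anticommutator in the example is NOT a plain derivative of a1 a2:
print(np.allclose(anti(a1.T, a1 @ a2), (2 * n1 - np.eye(4)) @ a2))  # True
# ...while the ordinary commutator does act like d/d(a1):
print(np.allclose(comm(a1.T, a1 @ a2), a2))  # True
```

    The last two lines illustrate the point: with a creation operator against an even (quadratic) monomial it is the commutator, not the anticommutator, that acts like a derivative.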