Quantizing a two-dimensional Fermion Oscillator

In summary: the problem with quantizing the two-dimensional system defined by this Lagrange's function is that the equations of motion are non-trivial and the canonical momenta do not involve the velocities at all. The attempt with the Schrödinger equation and a Hamiltonian fails because the canonical momenta are simply y and -x. The path integral over the action cannot be used to define time evolution either, because there are no terms in S proportional to \frac{1}{t_1-t_0}.
  • #36
[tex]
p_x = y
[/tex]
[tex]
p_y = -x
[/tex]

[tex]
H=x^2 + y^2
[/tex]

So the Wikipedia page, http://en.wikipedia.org/wiki/Dirac_bracket, explains that I should take a new Hamilton's function

[tex]
H=x^2 + y^2 + u(p_x - y) + v(p_y + x)
[/tex]

with some arbitrary smooth functions [itex]u(x,y,p_x,p_y)[/itex] and [itex]v(x,y,p_x,p_y)[/itex]. The equations of motion now become

[tex]
\dot{x} = \frac{\partial H}{\partial p_x} = u
[/tex]
[tex]
\dot{y} = \frac{\partial H}{\partial p_y} = v
[/tex]
[tex]
\dot{p}_x = -\frac{\partial H}{\partial x} = -2x - v
[/tex]
[tex]
\dot{p}_y = -\frac{\partial H}{\partial y} = -2y + u
[/tex]

The functions u and v can be eliminated, and we get

[tex]
\dot{p}_x = -2x - \dot{y}
[/tex]
[tex]
\dot{p}_y = -2y + \dot{x}
[/tex]

Finally, by substituting [itex]\dot{p}_x-\dot{y}=0[/itex] and [itex]\dot{p}_y+\dot{x}=0[/itex], we get the same equations of motion

[tex]
\dot{x} = y
[/tex]
[tex]
\dot{y} = -x
[/tex]

that were also implied by the original Lagrange's function. All this seems to make sense, but I have difficulty understanding how the quantization happens. Where is the Dirac bracket coming from? Why not quantize the system by writing down the Schrödinger equation

[tex]
i\hbar\partial_t \Psi = \big(x^2 + y^2 - uy + vx - i\hbar(u\partial_x + v\partial_y)\big)\Psi?
[/tex]
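(By the way, the elimination of u and v earlier in this post is mechanical enough to check by computer. A small sympy sketch, with variable names of my own, that treats the time derivatives as unknowns and solves the four Hamilton equations together with the differentiated constraints:)

[code]
from sympy import symbols, Eq, solve

x, y, u, v, xdot, ydot = symbols('x y u v xdot ydot')

# Hamilton's equations from H = x^2 + y^2 + u(p_x - y) + v(p_y + x),
# with the constraints p_x = y, p_y = -x already used to replace
# pdot_x by ydot and pdot_y by -xdot.
eqs = [
    Eq(xdot, u),           # xdot = dH/dp_x
    Eq(ydot, v),           # ydot = dH/dp_y
    Eq(ydot, -2*x - v),    # pdot_x = -dH/dx with pdot_x = ydot
    Eq(-xdot, -2*y + u),   # pdot_y = -dH/dy with pdot_y = -xdot
]
print(solve(eqs, [xdot, ydot, u, v]))
# -> {xdot: y, ydot: -x, u: y, v: -x}, i.e. the original equations of motion
[/code]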
 
  • #37
I also tried to quantize this by regularizing the system like this

[tex]
L = \epsilon(\dot{x}^2 + \dot{y}^2) + \dot{x}y - x\dot{y} - x^2 - y^2
[/tex]

The canonical momenta are

[tex]
p_x = 2\epsilon\dot{x} + y
[/tex]
[tex]
p_y = 2\epsilon\dot{y} - x
[/tex]

and the system can be represented with a Hamilton's function

[tex]
H=\frac{1}{4\epsilon}(p_x^2 \;+\; p_y^2) \;+\; \frac{1}{2\epsilon}(xp_y \;-\; yp_x) \;+\; \big(1\;+\;\frac{1}{4\epsilon}\big)(x^2 \;+\; y^2)
[/tex]

as usual. If one goes through the labor of finding solutions to this, it becomes evident that not all solutions converge in the limit [itex]\epsilon\to 0[/itex]. However, the energies of those solutions diverge too. This means that this system approximates the original system in the following sense: no matter how small epsilon is, there always exist some high-energy solutions that are far different from the solutions of the original system. On the other hand, if we restrict attention to low-energy solutions only, then by setting epsilon sufficiently small, the solutions become approximately solutions of the original system.

In polar coordinates the Schrödinger equation becomes

[tex]
i\partial_t\Psi(t,r,\theta) = \Big(-\frac{1}{4\epsilon}\big(\partial_r^2 \;+\; \frac{1}{r}\partial_r \;+\; \frac{1}{r^2}\partial_{\theta}^2\big) \;-\; \frac{i}{2\epsilon}\partial_{\theta} \;+\; \big(1 \;+\; \frac{1}{4\epsilon}\big)r^2\Big)\Psi(t,r,\theta)
[/tex]

I think I've succeeded in solving the energy eigenstates of this. Firstly

[tex]
\psi(r,\theta) = \exp\big(-\frac{1}{2}\sqrt{1+4\epsilon} r^2\big)
[/tex]

is a solution. In analogy with the harmonic oscillator, it makes sense to next attempt solutions of the form

[tex]
\psi(r,\theta) = H(r,\theta) \exp\big(-\frac{1}{2}\sqrt{1+4\epsilon} r^2\big),
[/tex]

and substitute the ansatz

[tex]
H(r,\theta) = \sum_{k_1=0}^{\infty} \sum_{k_2=-\infty}^{\infty} a_{k_1,k_2} r^{k_1} e^{ik_2\theta}.
[/tex]

I don't want to go into details now, but it turns out that one can get the needed recursion relations for the coefficients [itex]a_{k_1,k_2}[/itex]. The energy spectrum becomes

[tex]
E_{n,k} = \frac{\sqrt{1+4\epsilon}(|k| + 2n + 1) + k}{2\epsilon},\quad n\in\{0,1,2,\ldots\},\;k\in\mathbb{Z}
[/tex]
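These eigenvalues can be sanity-checked by applying the polar-coordinate Hamiltonian above to explicit trial states; a small sympy sketch (my notation) for the ground state and for the (n,k)=(0,-1) state:

[code]
from sympy import symbols, sqrt, exp, I, diff, simplify

r = symbols('r', positive=True)
theta = symbols('theta', real=True)
eps = symbols('epsilon', positive=True)
alpha = sqrt(1 + 4*eps)

def H(psi):
    # the polar-coordinate Hamiltonian from the Schroedinger equation above
    return (-(diff(psi, r, 2) + diff(psi, r)/r + diff(psi, theta, 2)/r**2)/(4*eps)
            - I*diff(psi, theta)/(2*eps) + (1 + 1/(4*eps))*r**2*psi)

psi0 = exp(-alpha*r**2/2)                # n=0, k=0
psi1 = r*exp(-I*theta - alpha*r**2/2)    # n=0, k=-1

print(simplify(H(psi0)/psi0))  # -> sqrt(1+4*epsilon)/(2*epsilon) = E_{0,0}
print(simplify(H(psi1)/psi1))  # -> (2*sqrt(1+4*epsilon) - 1)/(2*epsilon) = E_{0,-1}
[/code]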

What I found somewhat surprising is that all the energies diverge towards infinity when one lets [itex]\epsilon\to 0[/itex], but in fact it is possible to make sense of this intuitively. I thought about it like this: classically, with epsilon set to zero, all finite-energy solutions are forced onto circular paths. In some sense, if the particle were forced off the circular path, its energy would go to infinity. Then suppose we have a quantum mechanical wave packet on the circular path. The wave packet cannot follow the circular path with perfect accuracy, and always has some amplitude for being off the path. Hence the infinite energies.

But why did I still not encounter anything that resembles fermions? Nothing is anti-commuting here. Is this simply a wrong way to attempt to quantize the system? If you consider the strong-magnetic-field approximation, then IMO this would seem a very well justified way to quantize the system.
 
  • #38
jostpuur said:
The equations of motion now become
[tex]
\dot{x} = \frac{\partial H}{\partial p_x} = u
[/tex]
[tex]
\dot{y} = \frac{\partial H}{\partial p_y} = v
[/tex]
[tex]
\dot{p}_x = -\frac{\partial H}{\partial x} = -2x - v
[/tex]
[tex]
\dot{p}_y = -\frac{\partial H}{\partial y} = -2y + u
[/tex]
I don't think you're allowed to write down those eqns of motion in a
singular (constrained) system. The Wiki page you quoted says that
the eqn of motion is
[tex]
\Big(\partial_q H + \dot p \Big)\delta q + \Big(\partial_p H - \dot q \Big)\delta p ~\approx~ 0
[/tex]
where "[itex]\approx[/itex]" means "weak equality". You're not allowed to set the
[itex]\delta q,~\delta p[/itex] to zero separately (to get the usual eqns of motion)
because the variations are restricted by a constraint.

Where is the Dirac bracket coming from?
If you carry through the constrained quantization procedure further, and more
carefully, (as explained on the Wiki page), I think you'll find that the usual
Poisson bracket is modified (in general) by the presence of constraints.

Why not quantize the system by writing down the Schrödinger equation
[tex]
i\hbar\partial_t \Psi = \big(x^2 + y^2 - uy + vx - i\hbar(u\partial_x + v\partial_y)\big)\Psi?
[/tex]
You haven't yet constructed a representation of the various classical observables
(functions on phase space) as self-adjoint operators on a Hilbert space.

For fermions, you need to start from a classical phase space based on
Grassmann variables.

P.S. Dirac's little booklet "Lectures on Quantum Mechanics" explains this stuff
far more pedagogically than the Wiki page. I picked up a copy from Amazon
quite cheaply.
 
  • #39
strangerep said:
For fermions, you need to start from a classical phase space based on
Grassmann variables.

I keep hearing this all the time, but I'm not convinced. I have two remarks.

My post #31: The anti-commutation of some operators does not imply anti-commutation of the classical variables.

Rainbow Child's post #11: He explains that we should be getting anti-commuting operators, somehow with the Dirac brackets, even though he did not mention classical Grassmann variables anywhere.

It is difficult for me to tell if this classical Grassmann variable thing is a myth or fact.

P.S. Dirac's little booklet "Lectures on Quantum Mechanics" explains this stuff
far more pedagogically than the Wiki page. I picked up a copy from Amazon
quite cheaply.

I hope you are talking about this

https://www.amazon.com/dp/0486417131/?tag=pfamazon01-20

It should be arriving in the post soon.
 
  • #40
In post #31:
jostpuur said:
If I have an operator A that corresponds to some physical quantity, then we usually interpret the expectation value of this as the corresponding classical quantity.
[tex]
A_{cl} = \langle \Psi|A|\Psi\rangle
[/tex]

Now suppose there is another quantity, and an operator B for it, so that AB+BA=0.
What is the product of these classical quantities? I would say it's this:

[tex]
A_{cl}B_{cl} = \langle \Psi|A|\Psi\rangle \langle \Psi|B|\Psi\rangle
[/tex]
For fermionic A,B the above can't be right.
Consider the case B=A: It could well be the case that [itex]\langle A\rangle \ne 0[/itex],
but we'll always find [itex]\langle A^2\rangle = 0[/itex] .
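The smallest possible example makes the point concrete: take a single fermionic mode, with A the annihilation operator represented as a 2x2 matrix (a numpy sketch; the particular state is my own choice):

[code]
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])               # annihilation operator: A @ A = 0
psi = np.array([1., 1.]) / np.sqrt(2)  # superposition of |0> and |1>

print(psi @ A @ psi)       # <A>   = 0.5, nonzero
print(psi @ A @ A @ psi)   # <A^2> = 0.0, always
[/code]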


Rainbow Child's post #11: He explains that we should be getting anti-commuting operators, somehow with the Dirac brackets, even though he did not mention classical Grassmann variables anywhere.
In post #11, I didn't see anything about anti-commuting operators. The Dirac brackets
that R.C. mentioned are defined on the Wiki page quoted earlier. They are generalizations
of Poisson brackets. No anti-commutation is involved, since it's dealing with
Poisson brackets of real/complex-valued functions on phase space.

It's crucial to distinguish the notions of (1) constraints and Dirac brackets, and (2)
fermionic anti-commutators. The Wiki page mostly deals with constrained classical
(and bosonic) systems. For example, the [itex]M_{ab}[/itex] mentioned therein is
composed from the 2nd-class constraints (i.e: Poisson brackets of constraints that
don't commute with other constraints). For classical and bosonic systems, there's
always an even number of these, hence a square matrix makes sense. Normally,
a PB of something with itself is zero, but for a Grassmann-valued field F it's
possible that a Poisson bracket of F with itself won't vanish. Hence the method
must be adjusted accordingly (but the Wiki page doesn't elaborate on this).

The Wiki page mentions that constraints are always "applicable" to fermions
because the Lagrangian is linear in the velocities for fermions (think of the
Lagrangian for a free Dirac electron).

It is difficult for me to tell if this classical Grassmann variable thing is a myth or fact.
I'm not sure what you mean by "myth" and "fact" here. It is certainly a fact that
path integral methods in QFT for fermions use Grassmann variables. Whether this
is just an ad-hoc mathematical artifice, or indicative of some deeper physical truth,
depends on one's philosophy.


Yes. Note that it talks mainly about constraints, and the techniques to deal
with them. Not much about fermions specifically.

If you have a copy of Peskin & Schroeder, you can find a little bit about Grassmann
fields and the associated calculus in section 9.5. Also eq(9.75) for Grassmann
derivatives, but you'll probably need to find another source with a more extensive
treatment of the latter. You'll need Grassmann derivatives to see how classical
Poisson brackets become anti-commuting in the case of Grassmann fields.
The Wiki page alludes to this, but only very briefly. A (much) more advanced
treatment is in Henneaux & Teitelboim.
 
  • #41
I should probably wait until I get Dirac's lectures. This is getting a little bit speculative from me now... but you know, it's so difficult to stop thinking!

What happens if you do not assign any Grassmann properties to the classical field variables, but still quantize the field starting from the Dirac field Lagrangian? Is the canonical quantization still going to give anti-commuting field operators?

My problem at the moment is this: as long as I don't know what happens with canonical quantization, starting with commuting classical fields and with the Dirac Lagrangian, I keep hoping that this will give the correct anti-commuting quantum field.

I just made an interesting observation. Peskin & Schroeder don't mention a thing about Grassmann variables when they first talk about the Dirac field, but later, with path integral quantization, they introduce the Grassmann variables. Could it be that the Grassmann variables are intended to be used precisely with the path integral quantization, and not with the canonical operator quantization?
 
  • #42
It could be that I got the regularization attempt to its end. The eigenstates and eigenenergies of the quantized system, described by the Lagrangian

[tex]
L=\dot{x}y-x\dot{y}-x^2-y^2
[/tex]

should be

[tex]
\psi_n(r,\theta) = r^n e^{-\frac{1}{2}r^2-in\theta},\quad E_n=E_{\textrm{zero}} + n,\quad n\in\{0,1,2,3,\ldots\},
[/tex]

assuming I did everything right. These wave functions and energies were obtained by taking the limit [itex]\epsilon\to 0[/itex] of the solutions of the regularized system. The zero-point energy diverged towards infinity, but some of the energy differences remained finite. I'm not sure what this all means. Or is this nonsense? Some math it is, at least.
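The finiteness of the energy differences is easy to confirm with a sympy limit. For the states n=0, k=-m (m = 0, 1, 2, ...), the spectrum of #37 reduces to E = (sqrt(1+4*epsilon)*(m+1) - m)/(2*epsilon), and then (a small sketch, my notation):

[code]
from sympy import symbols, sqrt, limit, simplify

eps = symbols('epsilon', positive=True)
m = symbols('m', positive=True)

E = lambda k: (sqrt(1 + 4*eps)*(k + 1) - k)/(2*eps)   # E_{n=0, k=-m}

print(limit(E(0), eps, 0))                   # -> oo : zero-point energy diverges
print(simplify(limit(E(m) - E(0), eps, 0)))  # -> m  : differences stay finite
[/code]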
 
  • #43
Like Rainbow Child said, this particular problem runs into difficulties due to its classical structure. The non-regularity is a fairly serious problem. Another view is to remind yourself that quantisation is not a procedure. It is at best a heuristic for solving the inverse problem of taking the classical limit. Various people here have suggested ways to "quantise" the system, but not all of them will contain the relevant physics that you are looking for, even if they all formally reduce down to the same equations of motion.
 
  • #44
jostpuur said:
What happens, if you do not assign any Grassmann properties to the classical field variables, but still quantize the field by starting from the Dirac field Lagrangian. Is the canonical quantization still going to give anti-commuting field operators?
I'll work it out in detail later.

Peskin & Schroeder don't mention a thing about the Grassmann variables when they first talk about the Dirac's field, but later, with path integral quantization, they introduce the Grassmann variables. Could it be, that the Grassmann variables are intended to be used precisely with the path integral quantization, and not with the canonical operator quantization?
Look at the last paragraphs on p56, before eq(3.96). P&S impose anti-commutation
relations arbitrarily (after going through the usual arguments about how commutators
don't work for fermions). Then again on p58 near eqs(3.101, 3.102). They are
imposing anti-commutation arbitrarily.

So either way, be it with canonical quantization or path integrals, they're
putting in anti-commutation by hand.
 
  • #46
Jostpuur,

Back in your posts #36 and #37, I think you were
applying the Dirac-Bergmann constraint quantization
method incorrectly. Here's my attempt at it...

Starting from the Lagrangian
[tex]
L ~=~ \dot x y - x\dot y - x^2 - y^2 ~,
[/tex]
the canonical momenta are
[tex]
p_x ~=~ \frac{\partial L}{\partial \dot x} ~=~ y ~;~~~~
p_y ~=~ \frac{\partial L}{\partial \dot y} ~=~ -x
[/tex]
and the "standard" Hamiltonian is
[tex]
H ~=~ p_x \dot x + p_y \dot y - L ~=~ x^2 + y^2 ~=:~ V(x,y)
[/tex]
Following the Dirac-Bergman method, we have the constraint functions:
[tex]
\phi_1 := p_x - y ~,~~~~ \phi_2 := p_y + x
[/tex]
The standard Poisson bracket is defined by
[tex]
\{F,G\} ~=~ \sum_i \left(
\frac{\partial F}{\partial q_i} \frac{\partial G}{\partial p_i}
- \frac{\partial F}{\partial p_i} \frac{\partial G}{\partial q_i}
\right) ~~~~~~(PB)
[/tex]
So the only non-vanishing Poisson bracket between the constraint
functions is
[tex]
\{\phi_1, \phi_2\}
~=~ \partial_y \phi_1 \partial_{p_y} \phi_2
- \partial_{p_x} \phi_1 \partial_x \phi_2
~=~ -2 ~.
[/tex]
Therefore, [itex]\phi_1[/itex] and [itex]\phi_2[/itex] are
"2nd-class constraints".

To get the Dirac bracket, we need the matrix
[itex]M_{ab} := \{\phi_a, \phi_b\}[/itex] and its inverse...
which here is
[tex]
[M_{ab}] ~:=~
\left(\begin{array}{cc} 0 & -2 \\ 2 & 0 \end{array}\right)
~;~~~~~~
[M_{ab}]^{-1} ~:=~
\frac{1}{2}\left(\begin{array}{cc} 0&1\\ -1&0 \end{array}\right)
[/tex]
and the standard Dirac bracket is then given by
[tex]
\{F,G\}_{DB} ~=~ \{F,G\}
~-~ \sum_{ab} \{F,\phi_a\}~M^{-1}_{ab}~\{\phi_b,G\}
~~~~~~(DB1)
[/tex]
or in our case,
[tex]
\{F,G\}_{DB} ~=~ \{F,G\}
~-~ \frac{1}{2} ~ \left(\{F,\phi_1\},~ \{F,\phi_2\}\right)
\left(\begin{array}{cc} 0&1\\ -1&0 \end{array}\right)
{\{\phi_1,G\} \choose \{\phi_2,G\}}
~~~~~~(DB2)
[/tex]
Writing out some ordinary Poisson brackets between
[itex]x,y,p_x,p_y[/itex] and the constraints, we find only the
following are non-zero:
[tex]
\{x,\phi_1\} = \{y,\phi_2\} = 1 ~;~~~~
\{p_x,\phi_2\} = -1 ~;~~~~ \{p_y,\phi_1\} = 1 ~.
[/tex]
Poisson brackets between [itex]x,y,p_x,p_y[/itex] are
[tex]
\{x,y\} = \{p_x,p_y\} = \{y,p_x\} = \{x,p_y\} = 0 ~;~~~~
\{x,p_x\} = \{y,p_y\} = 1 ~;~~~~
[/tex]
[Continued in next post...]
 
  • #47
(Continuation of post #46...)

Now we can compute the Dirac brackets. I find that only the following
ones are non-zero:
[tex]
\{x,y\}_{DB} = \{x,p_x\}_{DB}
= \{y,p_y\}_{DB} = \{p_x,p_y\}_{DB} = \frac{1}{2}
[/tex]
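(All of these brackets are mechanical to verify; here is a sympy sketch, with helper names of my own choosing, that reproduces the table:)

[code]
from sympy import symbols, diff, Matrix, simplify

x, y, px, py = symbols('x y p_x p_y')
q, p = [x, y], [px, py]

def PB(F, G):
    # standard Poisson bracket (PB)
    return sum(diff(F, q[i])*diff(G, p[i]) - diff(F, p[i])*diff(G, q[i])
               for i in range(2))

phi = [px - y, py + x]                               # the 2nd-class constraints
Minv = Matrix(2, 2, lambda a, b: PB(phi[a], phi[b])).inv()

def DB(F, G):
    # Dirac bracket (DB1)
    return simplify(PB(F, G) - sum(PB(F, phi[a])*Minv[a, b]*PB(phi[b], G)
                                   for a in range(2) for b in range(2)))

for F, G in [(x, y), (x, px), (y, py), (px, py), (x, py), (y, px)]:
    print(F, G, DB(F, G))
# the first four give 1/2, the cross pairs (x,p_y) and (y,p_x) give 0
[/code]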
By the standard prescription, we can quantize the theory by using
the original Hamiltonian [itex]\widehat H := \hat x^2 + \hat y^2[/itex],
together with the commutation relations:
[tex]
[\hat x,\hat y] = [\hat x,\hat p_x]
= [\hat y,\hat p_y] = [\hat p_x,\hat p_y]
= \frac{i\hbar}{2} ~~~~~~(CR)
[/tex]
This is very similar to earlier posts by Rainbow Child and
samalkhaiat (except for some slight differences in signs and
factors).

I.e., the time-evolution of any observable operator
[itex]\hat A = \hat A(\hat x, \hat y)[/itex] in the Hilbert space
for this theory is given by
[tex]
i\hbar ~ \frac{\partial \hat A}{\partial t} ~=~ [\hat A, \hat H] ~.
[/tex]

But all this is still bosonic. There's no way to somehow turn the handle
further and extract anti-commutators. (I think you were misinterpreting
RC's remarks about using the Dirac bracket to "get" anti-commutators.
I don't think that's what RC actually meant.) You can't get anti-commutators
from commutators.

Instead, you've got to take the approach that samalkhaiat explained
earlier, and use Grassmann numbers. But then (as he also explained),
terms like [itex]x^2[/itex] in your Lagrangian are identically zero.
Game over. The Lagrangian cannot represent a "fermionic oscillator".
But neither can it represent a physical bosonic system because it lacks
quadratic momentum (kinetic energy) terms. (This is what samalkhaiat
was trying to emphasize earlier).
 
  • #48
I'm so out of time at the moment! :cry: :frown: I'm forced to put off thinking about these things until later.

I'll make one comment on this

strangerep said:
Look at the last paragraphs on p56, before eq(3.96). P&S impose anti-commutation
relations arbitrarily (after going through the usual arguments about how commutators
don't work for fermions). Then again on p58 near eqs(3.101, 3.102). They are
imposing anti-commutation arbitrarily.

So either way, be it with canonical quantization or path integrals, they're
putting in anti-commutation by hand.

(because I had thought about this earlier already.) In the chapter on the Dirac field, P&S impose anti-commutation relations on the fields only at the quantization step. They start the chapter with the classical Dirac field, and there is no mention of anti-commuting Grassmann numbers in that context yet. They let the reader assume that the classical Dirac field is [itex]\psi\in\mathbb{C}^4[/itex], and that it is only the quantized field that anti-commutes. I'm still trying to keep my hopes up for the possibility that the anti-commuting classical variables belong only to the path integral quantization, because at the moment that seems the only way this could start making sense.

I used to call the P&S Introduction to QFT a "bible of QFT", because the proofs are left as a matter of "faith". Now that I'm trying to see where the anti-commuting numbers really belong, I can see that it is also possible to interpret this book in different ways!
 
  • #49
I'll be making progress with this slowly but surely.

strangerep said:
Jostpuur,

Back in your posts #36 and #37, I think you were
applying the Dirac-Bergmann constraint quantization
method incorrectly.

I see that #36 was on a completely different track than the Dirac-Bergmann method. Although my calculation is probably not total nonsense, because the equations of motion were right in the end... or was it lucky nonsense?

#37 was not supposed to be Dirac-Bergmann quantization. It was my own regularization attempt. Now I'm keeping my hopes up that the same energy spectrum would also follow from the constraint approach, because then there would be a chance that the regularization was not nonsense.

Here's my attempt at it...

Starting from the Lagrangian
[tex]
L ~=~ \dot x y - x\dot y - x^2 - y^2 ~,
[/tex]
the canonical momenta are
[tex]
p_x ~=~ \frac{\partial L}{\partial \dot x} ~=~ y ~;~~~~
p_y ~=~ \frac{\partial L}{\partial \dot y} ~=~ -x
[/tex]
and the "standard" Hamiltonian is
[tex]
H ~=~ p_x \dot x + p_y \dot y - L ~=~ x^2 + y^2 ~=:~ V(x,y)
[/tex]
Following the Dirac-Bergman method, we have the constraint functions:
[tex]
\phi_1 := p_x - y ~,~~~~ \phi_2 := p_y + x
[/tex]

I see, and the equations of motion are

[tex]
\dot{x}\; =\; \frac{\partial H}{\partial p_x}\; +\; u_1\frac{\partial\phi_1}{\partial p_x}\;+\; u_2 \frac{\partial\phi_2}{\partial p_x} \; =\; u_1,
[/tex]

[tex]
\dot{y}\; =\; \frac{\partial H}{\partial p_y}\; +\; u_1\frac{\partial\phi_1}{\partial p_y}\;+\; u_2 \frac{\partial\phi_2}{\partial p_y} \; =\; u_2,
[/tex]

[tex]
\dot{p}_x\; =\; -\frac{\partial H}{\partial x}\; -\; u_1\frac{\partial\phi_1}{\partial x}\;-\; u_2 \frac{\partial\phi_2}{\partial x} \; =\; -2x -u_2,
[/tex]

[tex]
\dot{p}_y\; =\; -\frac{\partial H}{\partial y}\; -\; u_1\frac{\partial\phi_1}{\partial y}\;-\; u_2 \frac{\partial\phi_2}{\partial y} \; =\; -2y +u_1.
[/tex]

The standard Poisson bracket is defined by
[tex]
\{F,G\} ~=~ \sum_i \left(
\frac{\partial F}{\partial q_i} \frac{\partial G}{\partial p_i}
- \frac{\partial F}{\partial p_i} \frac{\partial G}{\partial q_i}
\right) ~~~~~~(PB)
[/tex]
So the only non-vanishing Poisson bracket between the constraint
functions is
[tex]
\{\phi_1, \phi_2\}
~=~ \partial_y \phi_1 \partial_{p_y} \phi_2
- \partial_{p_x} \phi_1 \partial_x \phi_2
~=~ -2 ~.
[/tex]
Therefore, [itex]\phi_1[/itex] and [itex]\phi_2[/itex] are
"2nd-class constraints".

I don't understand how one can see from this which constraints are second class.

If [itex]g(x,y,p_x,p_y)[/itex] is some function of the coordinates and momenta, it has the equation of motion

[tex]
\dot{g}\approx \{g,\;H+u_m\phi_m\}
[/tex]

Right now there is

[tex]
\frac{\partial}{\partial x}(u_m\phi_m) \approx u_2,\quad \frac{\partial}{\partial y}(u_m\phi_m) \approx -u_1,\quad \frac{\partial}{\partial p_x}(u_m\phi_m)\approx u_1,\quad \frac{\partial}{\partial p_y}(u_m\phi_m)\approx u_2
[/tex]

so

[tex]
\dot{g}\approx \frac{\partial g}{\partial x} u_1 + \frac{\partial g}{\partial y} u_2 - \frac{\partial g}{\partial p_x}(u_2 + 2x) + \frac{\partial g}{\partial p_y}(u_1-2y)
[/tex]

If I substitute [itex]g=\phi_1[/itex] and [itex]g=\phi_2[/itex], I get

[tex]
\dot{\phi}_1 \approx -2(u_2+x)
[/tex]

[tex]
\dot{\phi}_2 \approx 2(u_1-y).
[/tex]
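(These weak equations can also be checked with sympy; the u's are treated as constant symbols here, and the constraints are imposed at the end by substituting p_x -> y, p_y -> -x:)

[code]
from sympy import symbols, diff, simplify

x, y, px, py, u1, u2 = symbols('x y p_x p_y u_1 u_2')

def PB(F, G):
    return (diff(F, x)*diff(G, px) - diff(F, px)*diff(G, x)
            + diff(F, y)*diff(G, py) - diff(F, py)*diff(G, y))

phi1, phi2 = px - y, py + x
H_T = x**2 + y**2 + u1*phi1 + u2*phi2   # total Hamiltonian

# impose the constraints weakly at the end; if u_1, u_2 were phase-space
# functions, the extra gradient terms would vanish weakly anyway
weak = lambda expr: simplify(expr.subs({px: y, py: -x}))

print(weak(PB(phi1, H_T)))   # -> -2*u_2 - 2*x,  i.e.  -2(u_2 + x)
print(weak(PB(phi2, H_T)))   # ->  2*u_1 - 2*y,  i.e.   2(u_1 - y)
[/code]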

Am I now correct to say that

[tex]
p_x - y = 0,\quad p_y + x = 0
[/tex]

are the primary constraints, and

[tex]
u_2+x = 0,\quad u_1-y = 0
[/tex]

are the secondary constraints?

About 2/3 of math in #46 & #47 still ahead...
 
  • #50
Oh, how confusing. Only now did I notice that the EOM in #36 and #49 are exactly the same! Well, I derived them from completely different starting points, at least. I'm not sure if that was coincidental... In #36 I merely calculated the usual Hamilton's equations, starting with the modified H, and applied the constraint condition in the end. In #49 I started with the equations with the Lagrange multipliers, the way it was supposed to be done.

The SE at the bottom of #36 was nonsense, at least.
 
  • #51
jostpuur said:
In the chapter on the Dirac field, P&S impose anti-commutation
relations on the fields only at the quantization step. They start
the chapter with the classical Dirac field, and there is no
mention of anti-commuting Grassmann numbers in that
context yet. They let the reader assume that the classical
Dirac field is [itex]\psi\in\mathbb{C}^4[/itex], and that it is
only the quantized field that anti-commutes.

Actually, in that chapter they start by *attempting* to
quantize the classical field, and find that it doesn't work.
Then they assume anti-commutation instead. This doesn't
really correspond to any process of "quantization" unless
you take the classical variables as anti-commuting in the
first place.

I'm still trying to keep my hopes up for the
possibility that the anti-commuting classical variables
belong only to the path integral quantization, because
at the moment that seems the only way this could start making
sense.
Quantization does not make rigorous sense. The passage from
classical to quantum is ill-defined guesswork. It's better
to think of the quantum theory and then see that a limit
as [itex]\hbar\to 0[/itex] gives a sensible classical theory.

I used to call the P&S Introduction to QFT a
"bible of QFT", because the proofs are left as a matter of "faith".
Huh? I've always been able to follow their proofs. But
nobody claims quantization is a "proof". So even though it's
a dark art, you don't have to "believe" in it without
evidence. The "proofs" are in whether it works
experimentally.
 
  • #52
jostpuur said:
[...] and the equations of motion are
[tex]
\dot{x}\; =\; \frac{\partial H}{\partial p_x}\; +\;
u_1\frac{\partial\phi_1}{\partial p_x}\;+\; u_2
\frac{\partial\phi_2}{\partial p_x} \; =\; u_1,
[/tex]
[...]

I note that you don't have much time, but I think you need
to re-study the Wiki page pen-in-hand. (I.e.,
http://en.wikipedia.org/wiki/Dirac_bracket). Don't just
skim-read it.

The point of the Dirac bracket is that, at the end of the
procedure, you can continue to use the original eqns
of motion (no u's), provided you use the Dirac bracket in
place of the Poisson bracket. The Dirac bracket respects the
constraints, unlike the Poisson bracket.

[tex]
\{\phi_1, \phi_2\}_{PB}
~=~ \partial_y \phi_1 \partial_{p_y} \phi_2
- \partial_{p_x} \phi_1 \partial_x \phi_2
~=~ -2 ~.
[/tex]
I don't understand how one can see from this which constraints are second
class.

Again, study the Wiki page when you're not rushed for time.

Any phase-space function f(q,p) is called "first class" if its Poisson
bracket with all of the constraints weakly vanishes, that is,
[itex]\{f, \phi_j\}_{PB} \approx 0, \forall j[/itex]. So a constraint
[itex]\phi_i[/itex] is called first class if its PB with all
the other constraints vanishes weakly (i.e., becomes 0 when you
set all the [itex]\phi[/itex]'s to 0). Since the PB above is -2,
it doesn't vanish weakly, hence the constraints themselves
are "second class" in this case.

[...] Am I now correct to say that

[tex]
u_2+x = 0,\quad u_1-y = 0
[/tex]

are the secondary constraints?

No. Look at points 1-4 in the "Consistency conditions" section
of the Wiki page. From point 3, "secondary constraints" do not
involve the [itex]u_k[/itex].

Rather, the above correspond to Wiki's point 4 (equations that
help determine the [itex]u_k[/itex]).

But in your case, you need not muck around with the
[itex]u_k[/itex] too much. You can just jump from the PB
of constraints to the matrix [itex]M_{ab}[/itex], which is
the crucial thing needed to write down the Dirac brackets.
That's what I did in my earlier post.

About 2/3 of math in #46 & #47 still ahead...
Probably better to study all of it thoroughly, together with
Wiki, before attempting a reply.
 
  • #53
strangerep said:
No. Look at points 1-4 in the "Consistency conditions" section
of the Wiki page. From point 3, "secondary constraints" do not
involve the [itex]u_k[/itex].

OK, I made a mistake. I have Dirac's lecture notes now, and tried to read it from there. He talks about different kinds of equations, and then says that one of those kinds is called secondary constraints, and I simply made a mistake in interpreting which kind he was talking about.
 
  • #54
strangerep said:
Any phase-space function f(q,p) is called "first class" if its Poisson
bracket with all of the constraints weakly vanishes, that is,
[itex]\{f, \phi_j\}_{PB} \approx 0, \forall j[/itex]. So a constraint
[itex]\phi_i[/itex] is called first class if its PB with all
the other constraints vanishes weakly (i.e., becomes 0 when you
set all the [itex]\phi[/itex]'s to 0). Since the PB above is -2,
it doesn't vanish weakly, hence the constraints themselves
are "second class" in this case.

So functions being first class or second class is a different thing from constraints being primary or secondary?

A second try:

[tex]
p_x - y = 0,\quad p_y + x = 0
[/tex]

are the primary constraints, and

[tex]
u_2+x = 0,\quad u_1-y = 0
[/tex]

are consistency conditions involving u's with no better name?
 
  • #55
jostpuur said:
So functions being first class or second class is a different thing from constraints being primary or secondary?
Yes.

[tex]
p_x - y = 0,\quad p_y + x = 0
[/tex]
are the primary constraints,
Yes.

and
[tex]
u_2+x = 0,\quad u_1-y = 0
[/tex]
are consistency conditions involving u's with no better name?
Yes.
 
  • #56
More thoughts on multiplying:

It would not make sense to say that the nature of [itex]\mathbb{R}^3[/itex] is such that the product [itex]\mathbb{R}^3\times\mathbb{R}^3\to\mathbb{R}^3[/itex] is given by the cross product [itex](x_1,x_2)\mapsto x_1\times x_2[/itex]. We can define whatever products on [itex]\mathbb{R}^3[/itex] we want, and different products could have different applications, all correct for different things. If you don't know what you want to calculate, then none of the products would be correct.

Similarly, it doesn't make sense to say that Nature is such that the product of the classical Dirac field is given by the anti-commuting Grassmann product. I could even define my own product [itex](E_1,E_2)\mapsto E_1 E_2[/itex] for the electric field, with no difficulty! So the real question is: for what purpose do we want the anti-commuting Grassmann product?

strangerep said:
Actually, in that chapter they start by *attempting* to quantize the classical field, and find that it doesn't work.

I would have been surprised if the same operators that worked for the Klein-Gordon field had also worked for the Dirac field, since the Dirac field has such a different Lagrange's function. It would not have been proper quantizing. You don't quantize the one-dimensional infinite square well by stealing operators from the harmonic oscillator either!

Then they assume anti-commutation instead.

For the operators. There is no clear mention of anti-commuting classical variables in this context.

This doesn't really correspond to any process of "quantization" unless you take the classical variables as anti-commuting in the first place.

I'm not arguing against this, but not believing it either. I must know what would happen to the operators if the classical variables were not anti-commuting.
 
  • #57
strangerep said:
By the standard prescription, we can quantize the theory by using
the original Hamiltonian [itex]\widehat H := \hat x^2 + \hat y^2[/itex],
together with the commutation relations:
[tex]
[\hat x,\hat y] = [\hat x,\hat p_x]
= [\hat y,\hat p_y] = [\hat p_x,\hat p_y]
= \frac{i\hbar}{2} ~~~~~~(CR)
[/tex]

On page 34 Dirac says

We further impose certain supplementary conditions on the wave function, namely:
[tex]
\phi_j\psi=0.
[/tex]

I suppose the motivation behind this is that this way the classical limit will respect the original constraints.

It is so easy to write

[tex]
(\hat{p}_x - \hat{y})\psi = 0
[/tex]

[tex]
(\hat{p}_y + \hat{x})\psi = 0,
[/tex]

but what can one do with these? Would the next step be to solve some explicit representations for these operators? It seems a difficult task, with such strange commutation relations between them.
 
  • #58
Or maybe it is not so difficult. For example

[tex]
\hat{x} = x + i\hbar\partial_y
[/tex]

[tex]
\hat{y} = y + \frac{i\hbar}{2}\partial_x
[/tex]

[tex]
\hat{p}_x = -\frac{1}{2}y - i\hbar\partial_x
[/tex]

[tex]
\hat{p}_y = -x - i\hbar\partial_y
[/tex]

have these commutation relations, but this is not the only possible choice.
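The commutation relations are easy to verify by letting the operators act on a test function; a sympy sketch:

[code]
from sympy import symbols, Function, diff, simplify, I

x, y, hbar = symbols('x y hbar')
f = Function('f')(x, y)

X  = lambda g: x*g + I*hbar*diff(g, y)
Y  = lambda g: y*g + I*hbar/2*diff(g, x)
Px = lambda g: -y*g/2 - I*hbar*diff(g, x)
Py = lambda g: -x*g - I*hbar*diff(g, y)

comm = lambda A, B: simplify(A(B(f)) - B(A(f)))

for A, B in [(X, Y), (X, Px), (Y, Py), (Px, Py)]:
    print(comm(A, B))             # each -> I*hbar*f(x, y)/2
print(comm(X, Py), comm(Y, Px))   # the cross pairs -> 0 0
[/code]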
 
  • #59
hmhmhmhmh... it would be a Schrödinger equation

[tex]
i\hbar\partial_t \psi \;=\; \big(x^2 \;+\; y^2 \;+\; i\hbar(2x\partial_y \;+\; y\partial_x) \;-\; \hbar^2(\frac{1}{4}\partial^2_x \;+ \;\partial_y^2)\big)\psi
[/tex]

with a supplementary condition

[tex]
(y+i\hbar\partial_x)\psi = 0
[/tex]

then?
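(The right-hand side here is just [itex]\hat{H}=\hat{x}^2+\hat{y}^2[/itex] expanded with the operators of #58; note that [itex]\hat{y}^2 = y^2 + i\hbar y\partial_x - \frac{\hbar^2}{4}\partial_x^2[/itex], which is where the [itex]y\partial_x[/itex] term comes from. A sympy check of the expansion:)

[code]
from sympy import symbols, Function, diff, simplify, I, Rational

x, y, hbar = symbols('x y hbar')
psi = Function('psi')(x, y)

X = lambda g: x*g + I*hbar*diff(g, y)   # the operators from #58
Y = lambda g: y*g + I*hbar/2*diff(g, x)

lhs = X(X(psi)) + Y(Y(psi))             # H = X^2 + Y^2 applied to psi
rhs = ((x**2 + y**2)*psi + I*hbar*(2*x*diff(psi, y) + y*diff(psi, x))
       - hbar**2*(Rational(1, 4)*diff(psi, x, 2) + diff(psi, y, 2)))

print(simplify(lhs - rhs))              # -> 0
[/code]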
 
  • #60
jostpuur said:
It would not make sense to say that the nature of [itex]\mathbb{R}^3[/itex] is such that the product [itex]\mathbb{R}^3\times\mathbb{R}^3\to\mathbb{R}^3[/itex] is given by the cross product [itex](x_1,x_2)\mapsto x_1\times x_2[/itex]. We can define whatever products on [itex]\mathbb{R}^3[/itex] we want, and different products could have different applications, all correct for different things. If you don't know what you want to calculate, then none of the products would be correct.

Right. [itex]\mathbb{R}^3[/itex] is just a representation space which can carry various
algebras. The usual cross product of vectors corresponds to the Lie algebra o(3). The
fundamental thing in any model of physical phenomena is the abstract algebra
underlying it. One can then construct concrete representations of this algebra on various
representation spaces.

The confusing thing about [itex]\mathbb{R}^3[/itex] and o(3) is that there's an
isomorphism between them, so one tends to think of them as the same thing. But
that temptation should be resisted. First choose the abstract algebra, then decide
what representation space is most convenient for calculations.
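(The isomorphism is the usual "hat map": send [itex]u\in\mathbb{R}^3[/itex] to the antisymmetric matrix [itex]\hat u[/itex] with [itex]\hat u v = u\times v[/itex]; then the cross product goes over into the matrix commutator. A quick numpy check:)

[code]
import numpy as np

def hat(u):
    # antisymmetric matrix with hat(u) @ v == np.cross(u, v)
    return np.array([[   0., -u[2],  u[1]],
                     [ u[2],    0., -u[0]],
                     [-u[1],  u[0],    0.]])

u, v = np.array([1., 2., 3.]), np.array([-4., 0., 5.])

print(np.allclose(hat(u) @ v, np.cross(u, v)))        # True
print(np.allclose(hat(np.cross(u, v)),                # cross product <-> commutator
                  hat(u) @ hat(v) - hat(v) @ hat(u))) # True
[/code]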

Similarly, it doesn't make sense to say that Nature is such that the product of the classical Dirac field is given by the anti-commuting Grassmann product. I could even define my own product [itex](E_1,E_2)\mapsto E_1 E_2[/itex] for the electric field, with no difficulty!
In general, one must show that the algebra is closed. In the simple case above, it means
that all such products must be in the original algebra, which is easy enough for the
simple commutative algebra above. But if one writes down a non-commuting algebra,
one must show that [itex][A,B][/itex] is also in the original algebra, i.e., that if [itex][A,B]=C[/itex]
then [itex]C[/itex] is in the original algebra. That's part of the definition of a Lie algebra,
i.e., for any [itex]A,B[/itex] in the algebra, the commutator [itex][A,B][/itex] is equal to
a linear combination of the basis elements of the algebra.

So the real question is: for what purpose do we want the anti-commuting Grassmann product?
Because any theory of electrons must be wrong unless the Pauli exclusion principle
is in there somewhere. That means we need an algebra such that [itex]A^2 = B^2 = 0[/itex],
etc, etc. Now, given a collection of algebra elements that all square to zero, we can
take linear combinations of these, e.g., [itex]E=A+B[/itex], and to get [itex]E^2 = 0[/itex]
we must have [itex]AB + BA = 0[/itex]. I.e., if we want the Pauli exclusion principle, together
with symmetry transformations that mix the algebra elements while continuing to respect
the Pauli principle, it is simpler just to start from a Grassmann algebra where [itex]AB + BA = 0[/itex]
and then [itex]A^2 = 0[/itex] becomes a special case.
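(That last computation can even be done with sympy's noncommutative symbols:)

[code]
from sympy import symbols, expand

A, B = symbols('A B', commutative=False)

sq = expand((A + B)**2)
print(sq)                           # A**2 + A*B + B*A + B**2
print(sq.subs({A**2: 0, B**2: 0}))  # A*B + B*A: so (A+B)**2 = 0 forces AB + BA = 0
[/code]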

I would have been surprised if the same operators that worked
for the Klein-Gordon field had also worked for the Dirac field, since the
Dirac field has such a different Lagrange's function. It would not have
been proper quantizing. You don't quantize the one-dimensional
infinite square well by stealing operators from the harmonic oscillator
either!
They're not using "the same operators that worked for the K-G field".
They're (attempting to) use the same prescription based on a
correspondence between Poisson brackets of functions on phase space,
and commutators of operators on Hilbert space. They find
that commutators don't work, and resort to anti-commutators. So in the
step between classical phase space and Hilbert space, they've
implicitly introduced a Grassmann algebra even though they don't use
that name until much later in the book. The crucial point is that
the anti-commutativity is introduced before the correct Hilbert space
is constructed.

I must know what would happen to the operators if the classical
variables were not anti-commuting.
You get a theory of electrons without the Pauli exclusion principle,
and without strictly positive energy. Such a theory is wrong.
 
  • #61
jostpuur said:
[...] this way the classical limit will respect the original constraints.
Wait,... let's go back to what I said in my previous post about algebras.
In (advanced) classical mechanics one works with functions over phase space,
e.g. f(p,q), g(p,q), etc. The Lagrangian action is such a function, and its
extremum gives the classical equation of motion through phase space.
The Hamiltonian is another such function.

The Hamiltonian formulation of such dynamics gives rise to the Poisson
bracket because we want any transformation of phase space functions to
leave the form of the Hamilton equations unchanged. Such transformations
form a group (a symplectic group) whose Lie algebra is expressed by the
Poisson bracket. I.e., we have an infinite-dimensional Lie algebra, consisting
of the set of functions f(p,q), g(p,q), etc, etc, all of whose Poisson brackets with
each other yield a function which is itself in the set. That's the important
thing - the product expressed by the Poisson bracket must close on the algebra.

For well-behaved cases (where the Poisson brackets close on the algebra),
quantization can then proceed by taking this Lie algebra and representing
it via operators on Hilbert space. For the ill-behaved cases with constraints,
the Poisson brackets don't close on the algebra, so we cannot yet perform
this quantization step. See below.

It is so easy to write

[tex]
(\hat{p}_x - \hat{y})\psi = 0
[/tex]

[tex]
(\hat{p}_y + \hat{x})\psi = 0,
[/tex]

but what can one do with these? Would the next step be to solve some explicit representations for these operators?
No. We need a valid Lie algebra first. There's no point
trying to find a representation for an ill-defined algebra.

Suppose we have two functions f(p,q) and g(p,q) which satisfy the equations
of motion, and also respect the constraints. The crucial point is that
it is not automatic that [itex]h(p,q) := \{f,g\}_{PB}[/itex] will
also satisfy the constraints. If h(p,q) doesn't satisfy the constraints,
we do not have a closed algebra, and therefore it's useless. We need
a closed Lie algebra. That's the whole point behind modifying
the Poisson bracket into the Dirac-Bergmann bracket. A function
[itex]b(p,q) := \{f,g\}_{DB}~~[/itex] does satisfy the constraints
and therefore gives a closed algebra which we can proceed to
represent sensibly on a Hilbert space.
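This closure can be seen very concretely in your example: the Dirac bracket of an arbitrary phase-space function with either constraint vanishes identically, so constraint-respecting functions stay constraint-respecting. A sympy sketch (helper names are my own):

[code]
from sympy import symbols, Function, diff, Matrix, simplify

x, y, px, py = symbols('x y p_x p_y')
q, p = [x, y], [px, py]
F = Function('F')(x, y, px, py)   # an arbitrary phase-space function

def PB(A, B):
    return sum(diff(A, q[i])*diff(B, p[i]) - diff(A, p[i])*diff(B, q[i])
               for i in range(2))

phi = [px - y, py + x]
Minv = Matrix(2, 2, lambda a, b: PB(phi[a], phi[b])).inv()

def DB(A, B):
    return simplify(PB(A, B) - sum(PB(A, phi[a])*Minv[a, b]*PB(phi[b], B)
                                   for a in range(2) for b in range(2)))

print(DB(F, phi[0]), DB(F, phi[1]))   # -> 0 0
[/code]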
 
  • #62
I'm getting down to simpler questions: so the classical Dirac field is not a map [tex]\mathbb{R}^3\to\mathbb{C}^4[/tex], but instead a map [tex]\mathbb{R}^3\to X[/tex], where X is some Grassmann algebra. Now... what is X? Is it a set? If it is, is there a definition for it, so that I could understand what it is? There exist lots of different Grassmann algebras, so the information that X is a Grassmann algebra alone does not yet answer my question.
 
  • #63
jostpuur said:
[...]: so the classical Dirac field is not a map [tex]\mathbb{R}^3\to\mathbb{C}^4[/tex], but instead a map [tex]\mathbb{R}^3\to X[/tex], where X is some Grassmann algebra. Now... what is X? Is it a set?
Not sure I understand the question. Any algebra is a set -- together with various operations
that map elements of the set amongst themselves.

If it is, is there a definition for it, so that I could understand what it is? There exist lots of different Grassmann algebras, so the information that X is a Grassmann algebra alone does not yet answer my question.
Again, the algebra is just as described in Peskin & Schroeder section 9.5, especially
pp299-301. On p299, think of their [itex]\theta,\eta[/itex] as corresponding to basis elements
(spin-up and spin-down, say). Taking linear combinations of these basis elements (i.e., multiplying
them by complex scalars, e.g., [itex]\lambda := A\theta + B\eta[/itex], where A,B are complex),
is enough to represent (massless) neutrinos. Let's call the space of all these combinations
"[itex]V[/itex]". To get a massive Dirac field, one must recognize that taking the complex
conjugate of the above results in an inequivalent algebra [itex]\bar V[/itex] (since they're not
related by a similarity transformation -- you can't get to [itex]\bar\lambda[/itex] via a transformation
like [itex]\omega\lambda\omega^{-1}[/itex]). The Dirac field is then just a direct sum of these
two inequivalent algebras. This is related to the stuff on p300 of P&S where
they introduce complex Grassmann numbers, starting just before eq(9.65).
 
  • #64
strangerep said:
On p299, think of their [itex]\theta,\eta[/itex] as corresponding to basis elements
(spin-up, and spin-down, say). Taking linear combinations of these basis elements (i.e., multiply
them by complex scalars, e.g., [itex]\lambda := A\theta + B\eta[/itex], where A,B are complex),
is enough to represent (massless) neutrinos.

I didn't understand that [itex]\theta,\eta[/itex] were supposed to be considered as fixed basis elements. I thought they were arbitrary variables [itex]\theta,\eta\in X[/itex] belonging to some set (and I'm now trying to figure out what the set X is). However, when they write expressions like

[tex]
\int d\theta\; f(\theta)
[/tex]

it sure doesn't look like [itex]\theta[/itex] is some basis element. It looks like a variable that goes through some domain of different values. I mean, if [itex]\theta[/itex] is some fixed element, then the integral is as absurd as

[tex]
\int d4\; f(4)
[/tex]
 
  • #65
Here, in the thread how to make given numbers grassmann, I gave a construction that makes the set [tex]\mathbb{R}[/tex] anti-commuting. Is that construction completely disconnected from the Grassmann algebras we actually need in physics?
 
  • #66
jostpuur said:
I didn't understand that [itex]\theta,\eta[/itex] were supposed to be considered as fixed basis elements. I thought they were arbitrary variables [itex]\theta,\eta\in X[/itex] belonging to some set (and I'm now trying to figure out what the set X is).
Look at P&S pp301-302. Take eq(9.71):

[tex]
\psi(x) ~=~ \sum_i \psi_i \phi_i(x) ~.
[/tex]

Which are the "basis" elements? The [itex]\psi_i[/itex] or the [itex]\phi_i(x)[/itex]?
The answer depends on which space you're focussing on --
the Grassmann values or the spacetime manifold. But what really
matters is the Grassmann-valued field on the LHS.

However, when they write expressions like

[tex]
\int d\theta\; f(\theta)
[/tex]

it sure doesn't look like [itex]\theta[/itex] is some basis element. It looks like
a variable that goes through some domain of different values. [...]
The purpose of these Grassmann integrals is to define
functional integrals for fermionic fields. (See P&S's unnumbered eqn at the
top of page 302.)

Here, in the thread how to make given numbers grassmann, I gave a construction that makes the set R
anti-commuting. Is that construction completely disconnected from the Grassmann algebras we actually need in physics?
I didn't have time to follow your construction carefully, so I'll just say that
what really matters are the abstract algebraic rules, not how you represent them.
 
  • #67
strangerep said:
I didn't have time to follow your construction carefully, so I'll just say that
what really matters are the abstract algebraic rules, not how you represent them.

Unfortunately the mere knowledge of anti-commutation does not fix the construction up to any reasonable isomorphism, as my example in the linear algebra sub-forum shows, because that construction probably isn't anything we need with the fermions now. Actually my construction was not an algebra according to the definition of an algebra in mathematics... I should have noticed it... but it did have anti-commuting numbers at least!

We can define one three dimensional algebra like this. Set multiplications of the basis elements to be

(1,0,0)(1,0,0)=0
(1,0,0)(0,1,0)=(0,0,1)
(1,0,0)(0,0,1)=0
(0,1,0)(1,0,0)=-(0,0,1)
(0,1,0)(0,1,0)=0
(0,1,0)(0,0,1)=0
(0,0,1)(1,0,0)=0
(0,0,1)(0,1,0)=0
(0,0,1)(0,0,1)=0

and we get a bilinear mapping [tex]\cdot:\mathbb{R}^3\times\mathbb{R}^3\to\mathbb{R}^3[/tex], which makes [tex](\mathbb{R}^3,\cdot)[/tex] an algebra. If we then denote

[tex]
\theta:=(1,0,0),\quad \eta:=(0,1,0),
[/tex]

we can start calculating according to the rules

[tex]
\theta\eta = -\eta\theta,\quad \alpha(\theta\eta)=(\alpha\theta)\eta,\quad\alpha\in\mathbb{R}
[/tex]

and so on...
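A tiny Python sketch of this table, representing elements as vectors of [tex]\mathbb{R}^3[/tex] and extending the table bilinearly:

[code]
import numpy as np

def mul(a, b):
    # bilinear product: e1*e2 = e3 = -e2*e1, all other basis products zero
    return np.array([0., 0., a[0]*b[1] - a[1]*b[0]])

theta = np.array([1., 0., 0.])
eta   = np.array([0., 1., 0.])

print(mul(theta, eta))                   # [0. 0. 1.]
print(mul(eta, theta))                   # [0. 0. -1.] -> theta*eta = -eta*theta
print(mul(theta, theta), mul(eta, eta))  # both zero   -> theta^2 = eta^2 = 0
[/code]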

Is this the kind of thing we need with fermions?
 
  • #68
jostpuur said:
Unfortunately the mere knowledge of anti-commutation does not
fix the construction up to any reasonable isomorphism, [...]
I think you mean "representation" rather than "construction". (You're devising a
concrete representation of an abstract algebra.) If one representation has different
properties than another, then some other algebraic item(s) have been introduced
somewhere.

[...]Is this the kind of thing we need with fermions?
Most people seem to get by OK using canonical anti-commutation relations
(or abstract Grassmann algebras) directly. I still don't really know where
you're trying to go with all this.

BTW, "exterior algebras" are a well-known case of Grassman algebras.
The Wiki page for the latter even redirects to the former.
 
  • #69
strangerep said:
Most people seem to get by OK using canonical anti-commutation relations
(or abstract Grassmann algebras) directly. I still don't really know where
you're trying to go with all this.

I am only trying to understand what P&S are talking about, and I'm still not fully convinced that it is like

[tex]
\theta:=(1,0,0),\quad\eta:=(0,1,0)
[/tex]

in my previous post, because it seems extremely strange to use notation

[tex]
\int d\theta\;f(\theta) = \int d(1,0,0)\; f(1,0,0)
[/tex]

for anything.

In fact now it would make tons of sense to define integrals like

[tex]
\int\limits_{\gamma} d\gamma\;f(\gamma) = \lim_{N\to\infty}\sum_{k=0}^{N-1} (\gamma(t_{k+1}) - \gamma(t_k))f(\gamma(t_k))
[/tex]

where [tex]\gamma:[a,b]\to\mathbb{R}^3[/tex] is some path, [itex]a = t_0 < t_1 < \cdots < t_N = b[/itex] is a partition of [a,b], and where we use the Grassmann multiplication

[tex]
\big(\gamma(t_{k+1}) - \gamma(t_k),\; f(\gamma(t_k))\big)\mapsto (\gamma(t_{k+1}) - \gamma(t_k))f(\gamma(t_k)).
[/tex]

For example with

[tex]
f(x_1,x_2,x_3) = (0,x_1^2,0)
[/tex]

and

[tex]
\gamma(t) = (t,0,0),\quad 0\leq t\leq L
[/tex]

the integral would be

[tex]
\int\limits_{\gamma} d\gamma\; f(\gamma) = (0,0,\frac{1}{3}L^3).
[/tex]
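The discretized sum can also be checked numerically with the product from my previous post; a quick Python sketch:

[code]
import numpy as np

def mul(a, b):
    # the anti-commuting product from post #67
    return np.array([0., 0., a[0]*b[1] - a[1]*b[0]])

f = lambda x: np.array([0., x[0]**2, 0.])
gamma = lambda t: np.array([t, 0., 0.])

L, N = 2.0, 10000
t = np.linspace(0.0, L, N + 1)
total = sum(mul(gamma(t[k+1]) - gamma(t[k]), f(gamma(t[k]))) for k in range(N))

print(total)   # -> approximately [0, 0, 2.6667] = (0, 0, L**3/3)
[/code]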

I'm sure this is one possible definition of Grassmann integration, but I can't tell whether it is the kind we are supposed to have.
 
  • #70
strangerep said:
I think you mean "representation" rather than "construction". (You're devising a
concrete representation of an abstract algebra.)

I was careful to use the word "construction", because the thing I defined in the linear algebra subforum was not an algebra. It was something else, but it had something anti-commuting.
 
